* [PATCH V3 00/17] perf tools and x86 PTI entry trampolines
@ 2018-05-22 10:54 Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 01/17] kallsyms: Simplify update_iter_mod() Adrian Hunter
` (18 more replies)
0 siblings, 19 replies; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
Hi
Here is V3 of patches to support x86 PTI entry trampolines in perf tools.
Patches also here:
http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti-v3
git://git.infradead.org/users/ahunter/linux-perf.git perf-tools-kpti-v3
V2 patches also here:
http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti-v2
git://git.infradead.org/users/ahunter/linux-perf.git perf-tools-kpti-v2
V1 patches also here:
http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti-v1
git://git.infradead.org/users/ahunter/linux-perf.git perf-tools-kpti-v1
Changes Since V2:

    x86: Add entry trampolines to kcore
    x86: kcore: Give entry trampolines all the same offset in kcore
        Combined into a single patch
        Added comment
        Expand commit message

    perf tools: Add machine__is() to identify machine arch
        Dropped because it has been applied

    perf tools: Fix kernel_start for PTI on x86
        Dropped because it has been applied

Changes Since V1:

    perf tools: Use the _stest symbol to identify the kernel map when loading kcore
        Dropped because it has been applied

    perf tools: Add machine__is() to identify machine arch
        New patch

    perf tools: Fix kernel_start for PTI on x86
        Moved definition of machine__is() to a separate patch

    perf tools: Add machine__nr_cpus_avail()
        New patch

    perf tools: Workaround missing maps for x86 PTI entry trampolines
        Use machine__nr_cpus_avail()

    perf tools: Create maps for x86 PTI entry trampolines
        Re-based

Changes Since RFC:

    Change description 'x86_64 KPTI' to 'x86 PTI'
    Rename 'special' kernel map to 'extra' kernel map etc

    kallsyms: Simplify update_iter_mod()
        Expand commit message

    perf tools: Fix kernel_start for PTI on x86
        Amend machine__is() to check if machine is NULL

    perf tools: Workaround missing maps for x86 PTI entry trampolines
        Simplify find_entry_trampoline()
        Add comment before struct extra_kernel_map:
        /* Kernel-space maps for symbols that are outside the main kernel
           map and module maps */

    perf tools: Create maps for x86 PTI entry trampolines
        Move code presently only used by x86_64 into arch

    perf tools: Synthesize and process mmap events for x86 PTI entry trampolines
        Fix spelling 'kernal' -> 'kernel'
        Rename 'special' kernel map to 'extra' kernel map etc
        Move code presently only used by x86_64 into arch

    perf buildid-cache: kcore_copy: Keep phdr data in a list
        Expand commit message
        Rename 'list' -> 'node'

    perf buildid-cache: kcore_copy: Get rid of kernel_map
        Expand commit message
        Add phdr_data__new()
        Rename 'kcore_copy__new_phdr' -> 'kcore_copy_info__addnew'
Original Cover email:
Perf tools do not know about x86 PTI entry trampolines - see example
below. These patches add a workaround, namely "perf tools: Workaround
missing maps for x86 PTI entry trampolines", which has the limitation
that it hardcodes the addresses. Note that the workaround will work for
old kernels and old perf.data files, but not for future kernels if the
trampoline addresses are ever changed.
At present, perf tools uses /proc/kallsyms to construct a memory map for
the kernel. Recording such a map in the perf.data file is necessary to
deal with kernel relocation and KASLR.
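As a rough sketch of that step (a hypothetical helper, not the actual perf source): each /proc/kallsyms line has the form "address type name [module]", and the tools parse those fields to locate kernel text symbols:

```c
#include <stdio.h>

/* Hypothetical sketch, not the actual perf code: parse one
 * /proc/kallsyms line of the form "ffffffff81000000 T _stext".
 * Returns 0 on success, -1 on a malformed line. */
static int parse_kallsyms_line(const char *line, unsigned long long *addr,
			       char *type, char name[64])
{
	return sscanf(line, "%llx %c %63s", addr, type, name) == 3 ? 0 : -1;
}
```

From symbols like these, perf anchors the kernel map at a known text symbol and later compensates for relocation and KASLR when the perf.data file is processed.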
While it is reasonable on its own terms to add symbols for the trampolines
to /proc/kallsyms, the motivation here is to have perf tools use them to
create memory maps in the same fashion as is done for the kernel text.
So the first 2 patches add symbols to /proc/kallsyms for the trampolines:
kallsyms: Simplify update_iter_mod()
kallsyms, x86: Export addresses of syscall trampolines
perf tools have the ability to use /proc/kcore (in conjunction with
/proc/kallsyms) as the kernel image. So the next 2 patches add program
headers for the trampolines to the kcore ELF:
x86: Add entry trampolines to kcore
x86: kcore: Give entry trampolines all the same offset in kcore
It is worth noting that, with the kcore changes alone, perf tools require
no changes to recognise the trampolines when using /proc/kcore.
Similarly, if perf tools are used with a matching kallsyms only (by denying
access to /proc/kcore or a vmlinux image), then the kallsyms patches are
sufficient to recognise the trampolines with no changes needed to the
tools.
However, in the general case, when using vmlinux or dealing with
relocations, perf tools need memory maps for the trampolines. Because the
kernel text map is constructed as a special case, using the same approach
for the trampolines means treating them as a special case also, which
requires a number of changes to perf tools, and the remaining patches deal
with that.
Example: make a program that does lots of small syscalls e.g.
$ cat uname_x_n.c
#include <sys/utsname.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
	long n = argc > 1 ? strtol(argv[1], NULL, 0) : 0;
	struct utsname u;

	while (n--)
		uname(&u);

	return 0;
}
and then:
sudo perf record uname_x_n 100000
sudo perf report --stdio
Before the changes, there are unknown symbols:
# Overhead  Command    Shared Object     Symbol
# ........  .........  ................  ..................................
#
    41.91%  uname_x_n  [kernel.vmlinux]  [k] syscall_return_via_sysret
    19.22%  uname_x_n  [kernel.vmlinux]  [k] copy_user_enhanced_fast_string
    18.70%  uname_x_n  [unknown]         [k] 0xfffffe00000e201b
     4.09%  uname_x_n  libc-2.19.so      [.] __GI___uname
     3.08%  uname_x_n  [kernel.vmlinux]  [k] do_syscall_64
     3.02%  uname_x_n  [unknown]         [k] 0xfffffe00000e2025
     2.32%  uname_x_n  [kernel.vmlinux]  [k] down_read
     2.27%  uname_x_n  ld-2.19.so        [.] _dl_start
     1.97%  uname_x_n  [unknown]         [k] 0xfffffe00000e201e
     1.25%  uname_x_n  [kernel.vmlinux]  [k] up_read
     1.02%  uname_x_n  [unknown]         [k] 0xfffffe00000e200c
     0.99%  uname_x_n  [kernel.vmlinux]  [k] entry_SYSCALL_64
     0.16%  uname_x_n  [kernel.vmlinux]  [k] flush_signal_handlers
     0.01%  perf       [kernel.vmlinux]  [k] native_sched_clock
     0.00%  perf       [kernel.vmlinux]  [k] native_write_msr
After the changes, there are no unknown symbols:
# Overhead  Command    Shared Object     Symbol
# ........  .........  ................  ..................................
#
    41.91%  uname_x_n  [kernel.vmlinux]  [k] syscall_return_via_sysret
    24.70%  uname_x_n  [kernel.vmlinux]  [k] entry_SYSCALL_64_trampoline
    19.22%  uname_x_n  [kernel.vmlinux]  [k] copy_user_enhanced_fast_string
     4.09%  uname_x_n  libc-2.19.so      [.] __GI___uname
     3.08%  uname_x_n  [kernel.vmlinux]  [k] do_syscall_64
     2.32%  uname_x_n  [kernel.vmlinux]  [k] down_read
     2.27%  uname_x_n  ld-2.19.so        [.] _dl_start
     1.25%  uname_x_n  [kernel.vmlinux]  [k] up_read
     0.99%  uname_x_n  [kernel.vmlinux]  [k] entry_SYSCALL_64
     0.16%  uname_x_n  [kernel.vmlinux]  [k] flush_signal_handlers
     0.01%  perf       [kernel.vmlinux]  [k] native_sched_clock
     0.00%  perf       [kernel.vmlinux]  [k] native_write_msr
Adrian Hunter (16):
kallsyms: Simplify update_iter_mod()
x86: Add entry trampolines to kcore
perf tools: Add machine__nr_cpus_avail()
perf tools: Workaround missing maps for x86 PTI entry trampolines
perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols
perf tools: Allow for extra kernel maps
perf tools: Create maps for x86 PTI entry trampolines
perf tools: Synthesize and process mmap events for x86 PTI entry trampolines
perf buildid-cache: kcore_copy: Keep phdr data in a list
perf buildid-cache: kcore_copy: Keep a count of phdrs
perf buildid-cache: kcore_copy: Calculate offset from phnum
perf buildid-cache: kcore_copy: Layout sections
perf buildid-cache: kcore_copy: Iterate phdrs
perf buildid-cache: kcore_copy: Get rid of kernel_map
perf buildid-cache: kcore_copy: Copy x86 PTI entry trampoline sections
perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text
Alexander Shishkin (1):
kallsyms, x86: Export addresses of syscall trampolines
arch/x86/mm/cpu_entry_area.c | 33 ++++++
fs/proc/kcore.c | 7 +-
include/linux/kcore.h | 13 +++
kernel/kallsyms.c | 46 +++++---
tools/perf/arch/x86/util/Build | 2 +
tools/perf/arch/x86/util/event.c | 76 +++++++++++++
tools/perf/arch/x86/util/machine.c | 103 +++++++++++++++++
tools/perf/util/env.c | 13 +++
tools/perf/util/env.h | 1 +
tools/perf/util/event.c | 36 ++++--
tools/perf/util/event.h | 8 ++
tools/perf/util/machine.c | 175 +++++++++++++++++++++++++++--
tools/perf/util/machine.h | 23 ++++
tools/perf/util/map.c | 22 +++-
tools/perf/util/map.h | 15 ++-
tools/perf/util/symbol-elf.c | 219 +++++++++++++++++++++++++++++++------
tools/perf/util/symbol.c | 49 +++++++--
17 files changed, 762 insertions(+), 79 deletions(-)
create mode 100644 tools/perf/arch/x86/util/event.c
create mode 100644 tools/perf/arch/x86/util/machine.c
Regards
Adrian
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH V3 01/17] kallsyms: Simplify update_iter_mod()
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
The logic in update_iter_mod() is overcomplicated and gets worse every time
another get_ksymbol_* function is added.
In preparation for adding another get_ksymbol_* function, simplify logic in
update_iter_mod().
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
kernel/kallsyms.c | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
index a23e21ada81b..eda4b0222dab 100644
--- a/kernel/kallsyms.c
+++ b/kernel/kallsyms.c
@@ -510,23 +510,15 @@ static int update_iter_mod(struct kallsym_iter *iter, loff_t pos)
{
iter->pos = pos;
- if (iter->pos_ftrace_mod_end > 0 &&
- iter->pos_ftrace_mod_end < iter->pos)
- return get_ksymbol_bpf(iter);
-
- if (iter->pos_mod_end > 0 &&
- iter->pos_mod_end < iter->pos) {
- if (!get_ksymbol_ftrace_mod(iter))
- return get_ksymbol_bpf(iter);
+ if ((!iter->pos_mod_end || iter->pos_mod_end > pos) &&
+ get_ksymbol_mod(iter))
return 1;
- }
- if (!get_ksymbol_mod(iter)) {
- if (!get_ksymbol_ftrace_mod(iter))
- return get_ksymbol_bpf(iter);
- }
+ if ((!iter->pos_ftrace_mod_end || iter->pos_ftrace_mod_end > pos) &&
+ get_ksymbol_ftrace_mod(iter))
+ return 1;
- return 1;
+ return get_ksymbol_bpf(iter);
}
/* Returns false if pos at or past end of file. */
--
1.9.1
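The shape of the simplified cascade can be sketched generically (hypothetical types, not the kernel code): each symbol source remembers the position at which it ran out, and the iterator just tries the sources in order, skipping exhausted ones:

```c
#include <stdbool.h>

/* Hypothetical model of the cascade in the simplified update_iter_mod():
 * a source whose recorded end position is non-zero and <= pos has
 * already run out and is skipped; otherwise it is asked for a symbol. */
struct sym_source {
	long pos_end;          /* 0 while the source is still yielding */
	bool (*get)(long pos); /* true if this source has a symbol at pos */
};

static bool next_symbol(struct sym_source *srcs, int nr, long pos)
{
	int i;

	for (i = 0; i < nr; i++) {
		if (srcs[i].pos_end && srcs[i].pos_end <= pos)
			continue;          /* exhausted: try the next source */
		if (srcs[i].get(pos))
			return true;
		srcs[i].pos_end = pos;     /* remember where it ran out */
	}
	return false;
}

/* Two toy sources: 2 "module" symbols, then "bpf" symbols up to pos 5. */
static bool mod_get(long pos) { return pos < 2; }
static bool bpf_get(long pos) { return pos < 5; }

/* Walk positions from 0 until every source is exhausted. */
static int demo_count(void)
{
	struct sym_source srcs[] = { { 0, mod_get }, { 0, bpf_get } };
	long pos = 0;

	while (next_symbol(srcs, 2, pos))
		pos++;
	return (int)pos;
}
```

The real function additionally rebases the position per source (e.g. pos - iter->pos_mod_end before calling the next get_ksymbol_* helper); that detail is omitted from this toy.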
* [PATCH V3 02/17] kallsyms, x86: Export addresses of syscall trampolines
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
From: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
---
arch/x86/mm/cpu_entry_area.c | 23 +++++++++++++++++++++++
kernel/kallsyms.c | 28 +++++++++++++++++++++++++++-
2 files changed, 50 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index b45f5aaefd74..d1da5cf4b2de 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -2,6 +2,7 @@
#include <linux/spinlock.h>
#include <linux/percpu.h>
+#include <linux/kallsyms.h>
#include <asm/cpu_entry_area.h>
#include <asm/pgtable.h>
@@ -150,6 +151,28 @@ static void __init setup_cpu_entry_area(int cpu)
percpu_setup_debug_store(cpu);
}
+#ifdef CONFIG_X86_64
+int arch_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
+ char *name)
+{
+ unsigned int cpu, ncpu;
+
+ if (symnum >= num_possible_cpus())
+ return -EINVAL;
+
+ for (cpu = cpumask_first(cpu_possible_mask), ncpu = 0;
+ cpu < num_possible_cpus() && ncpu < symnum;
+ cpu = cpumask_next(cpu, cpu_possible_mask), ncpu++)
+ ;
+
+ *value = (unsigned long)&get_cpu_entry_area(cpu)->entry_trampoline;
+ *type = 't';
+ strlcpy(name, "__entry_SYSCALL_64_trampoline", KSYM_NAME_LEN);
+
+ return 0;
+}
+#endif
+
static __init void setup_cpu_entry_area_ptes(void)
{
#ifdef CONFIG_X86_32
diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
index eda4b0222dab..ebe6befac47e 100644
--- a/kernel/kallsyms.c
+++ b/kernel/kallsyms.c
@@ -432,6 +432,7 @@ int sprint_backtrace(char *buffer, unsigned long address)
/* To avoid using get_symbol_offset for every symbol, we carry prefix along. */
struct kallsym_iter {
loff_t pos;
+ loff_t pos_arch_end;
loff_t pos_mod_end;
loff_t pos_ftrace_mod_end;
unsigned long value;
@@ -443,9 +444,29 @@ struct kallsym_iter {
int show_value;
};
+int __weak arch_get_kallsym(unsigned int symnum, unsigned long *value,
+ char *type, char *name)
+{
+ return -EINVAL;
+}
+
+static int get_ksymbol_arch(struct kallsym_iter *iter)
+{
+ int ret = arch_get_kallsym(iter->pos - kallsyms_num_syms,
+ &iter->value, &iter->type,
+ iter->name);
+
+ if (ret < 0) {
+ iter->pos_arch_end = iter->pos;
+ return 0;
+ }
+
+ return 1;
+}
+
static int get_ksymbol_mod(struct kallsym_iter *iter)
{
- int ret = module_get_kallsym(iter->pos - kallsyms_num_syms,
+ int ret = module_get_kallsym(iter->pos - iter->pos_arch_end,
&iter->value, &iter->type,
iter->name, iter->module_name,
&iter->exported);
@@ -501,6 +522,7 @@ static void reset_iter(struct kallsym_iter *iter, loff_t new_pos)
iter->nameoff = get_symbol_offset(new_pos);
iter->pos = new_pos;
if (new_pos == 0) {
+ iter->pos_arch_end = 0;
iter->pos_mod_end = 0;
iter->pos_ftrace_mod_end = 0;
}
@@ -510,6 +532,10 @@ static int update_iter_mod(struct kallsym_iter *iter, loff_t pos)
{
iter->pos = pos;
+ if ((!iter->pos_arch_end || iter->pos_arch_end > pos) &&
+ get_ksymbol_arch(iter))
+ return 1;
+
if ((!iter->pos_mod_end || iter->pos_mod_end > pos) &&
get_ksymbol_mod(iter))
return 1;
--
1.9.1
* [PATCH V3 03/17] x86: Add entry trampolines to kcore
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
Without program headers for PTI entry trampoline pages, the trampoline
virtual addresses do not map to anything.
Example before:
sudo gdb --quiet vmlinux /proc/kcore
Reading symbols from vmlinux...done.
[New process 1]
Core was generated by `BOOT_IMAGE=/boot/vmlinuz-4.16.0 root=UUID=a6096b83-b763-4101-807e-f33daff63233'.
#0 0x0000000000000000 in irq_stack_union ()
(gdb) x /21ib 0xfffffe0000006000
0xfffffe0000006000: Cannot access memory at address 0xfffffe0000006000
(gdb) quit
After:
sudo gdb --quiet vmlinux /proc/kcore
Reading symbols from vmlinux...done.
[New process 1]
Core was generated by `BOOT_IMAGE=/boot/vmlinuz-4.16.0-fix-4-00005-gd6e65a8b4072 root=UUID=a6096b83-b7'.
#0 0x0000000000000000 in irq_stack_union ()
(gdb) x /21ib 0xfffffe0000006000
0xfffffe0000006000: swapgs
0xfffffe0000006003: mov %rsp,-0x3e12(%rip) # 0xfffffe00000021f8
0xfffffe000000600a: xchg %ax,%ax
0xfffffe000000600c: mov %cr3,%rsp
0xfffffe000000600f: bts $0x3f,%rsp
0xfffffe0000006014: and $0xffffffffffffe7ff,%rsp
0xfffffe000000601b: mov %rsp,%cr3
0xfffffe000000601e: mov -0x3019(%rip),%rsp # 0xfffffe000000300c
0xfffffe0000006025: pushq $0x2b
0xfffffe0000006027: pushq -0x3e35(%rip) # 0xfffffe00000021f8
0xfffffe000000602d: push %r11
0xfffffe000000602f: pushq $0x33
0xfffffe0000006031: push %rcx
0xfffffe0000006032: push %rdi
0xfffffe0000006033: mov $0xffffffff91a00010,%rdi
0xfffffe000000603a: callq 0xfffffe0000006046
0xfffffe000000603f: pause
0xfffffe0000006041: lfence
0xfffffe0000006044: jmp 0xfffffe000000603f
0xfffffe0000006046: mov %rdi,(%rsp)
0xfffffe000000604a: retq
(gdb) quit
In addition, the entry trampolines all map to the same page. Represent that
by giving the corresponding program headers in kcore the same offset.
This has the benefit that, when perf tools use /proc/kcore as a source for
kernel object code, samples from different CPUs' trampolines are aggregated
together. Note that such aggregation is normal for profiling, i.e. people
want to profile the object code, not every different virtual address the
object code might be mapped to (across different processes, for example).
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
arch/x86/mm/cpu_entry_area.c | 10 ++++++++++
fs/proc/kcore.c | 7 +++++--
include/linux/kcore.h | 13 +++++++++++++
3 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index d1da5cf4b2de..c727a2fbe613 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -3,6 +3,7 @@
#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/kallsyms.h>
+#include <linux/kcore.h>
#include <asm/cpu_entry_area.h>
#include <asm/pgtable.h>
@@ -14,6 +15,7 @@
#ifdef CONFIG_X86_64
static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);
+static DEFINE_PER_CPU(struct kcore_list, kcore_entry_trampoline);
#endif
struct cpu_entry_area *get_cpu_entry_area(int cpu)
@@ -147,6 +149,14 @@ static void __init setup_cpu_entry_area(int cpu)
cea_set_pte(&get_cpu_entry_area(cpu)->entry_trampoline,
__pa_symbol(_entry_trampoline), PAGE_KERNEL_RX);
+ /*
+ * The cpu_entry_area alias addresses are not in the kernel binary
+ * so they do not show up in /proc/kcore normally. This adds entries
+ * for them manually.
+ */
+ kclist_add_remap(&per_cpu(kcore_entry_trampoline, cpu),
+ _entry_trampoline,
+ &get_cpu_entry_area(cpu)->entry_trampoline, PAGE_SIZE);
#endif
percpu_setup_debug_store(cpu);
}
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index e64ecb9f2720..00282f134336 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -383,8 +383,11 @@ static void elf_kcore_store_hdr(char *bufp, int nphdr, int dataoff)
phdr->p_type = PT_LOAD;
phdr->p_flags = PF_R|PF_W|PF_X;
phdr->p_offset = kc_vaddr_to_offset(m->addr) + dataoff;
- phdr->p_vaddr = (size_t)m->addr;
- if (m->type == KCORE_RAM || m->type == KCORE_TEXT)
+ if (m->type == KCORE_REMAP)
+ phdr->p_vaddr = (size_t)m->vaddr;
+ else
+ phdr->p_vaddr = (size_t)m->addr;
+ if (m->type == KCORE_RAM || m->type == KCORE_TEXT || m->type == KCORE_REMAP)
phdr->p_paddr = __pa(m->addr);
else
phdr->p_paddr = (elf_addr_t)-1;
diff --git a/include/linux/kcore.h b/include/linux/kcore.h
index 80db19d3a505..3a11ce51e137 100644
--- a/include/linux/kcore.h
+++ b/include/linux/kcore.h
@@ -12,11 +12,13 @@ enum kcore_type {
KCORE_VMEMMAP,
KCORE_USER,
KCORE_OTHER,
+ KCORE_REMAP,
};
struct kcore_list {
struct list_head list;
unsigned long addr;
+ unsigned long vaddr;
size_t size;
int type;
};
@@ -30,11 +32,22 @@ struct vmcore {
#ifdef CONFIG_PROC_KCORE
extern void kclist_add(struct kcore_list *, void *, size_t, int type);
+static inline
+void kclist_add_remap(struct kcore_list *m, void *addr, void *vaddr, size_t sz)
+{
+ m->vaddr = (unsigned long)vaddr;
+ kclist_add(m, addr, sz, KCORE_REMAP);
+}
#else
static inline
void kclist_add(struct kcore_list *new, void *addr, size_t size, int type)
{
}
+
+static inline
+void kclist_add_remap(struct kcore_list *m, void *addr, void *vaddr, size_t sz)
+{
+}
#endif
#endif /* _LINUX_KCORE_H */
--
1.9.1
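The effect on readers of /proc/kcore can be sketched as follows (hypothetical code with made-up offsets, not from the patch): address-to-offset translation walks the program headers, so when several KCORE_REMAP headers carry different p_vaddr but the same p_offset, every CPU's trampoline alias yields the same file bytes:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for an ELF program header as a kcore reader sees it:
 * [p_vaddr, p_vaddr + p_filesz) maps to file offset p_offset. */
struct phdr {
	uint64_t p_vaddr;
	uint64_t p_offset;
	uint64_t p_filesz;
};

/* Translate a virtual address to a kcore file offset, or (uint64_t)-1. */
static uint64_t vaddr_to_offset(const struct phdr *ph, size_t n, uint64_t va)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (va >= ph[i].p_vaddr && va < ph[i].p_vaddr + ph[i].p_filesz)
			return ph[i].p_offset + (va - ph[i].p_vaddr);
	}
	return (uint64_t)-1;
}

/* Two CPUs' trampoline aliases remapped to the same (made-up) offset: */
static const struct phdr demo_phdrs[] = {
	{ 0xfffffe0000006000ULL, 0x1000, 0x1000 }, /* CPU 0 alias */
	{ 0xfffffe0000032000ULL, 0x1000, 0x1000 }, /* CPU 1 alias, same page */
};
```

Because both aliases resolve to the same offset, a tool disassembling either virtual address reads identical trampoline bytes, which is what makes per-CPU samples aggregate.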
* [PATCH V3 04/17] perf tools: Add machine__nr_cpus_avail()
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
Add a function to return the number of the machine's available CPUs.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/env.c | 13 +++++++++++++
tools/perf/util/env.h | 1 +
tools/perf/util/machine.c | 5 +++++
tools/perf/util/machine.h | 1 +
4 files changed, 20 insertions(+)
diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
index 319fb0a0d05e..59f38c7693f8 100644
--- a/tools/perf/util/env.c
+++ b/tools/perf/util/env.c
@@ -106,11 +106,24 @@ static int perf_env__read_arch(struct perf_env *env)
return env->arch ? 0 : -ENOMEM;
}
+static int perf_env__read_nr_cpus_avail(struct perf_env *env)
+{
+ if (env->nr_cpus_avail == 0)
+ env->nr_cpus_avail = cpu__max_present_cpu();
+
+ return env->nr_cpus_avail ? 0 : -ENOENT;
+}
+
const char *perf_env__raw_arch(struct perf_env *env)
{
return env && !perf_env__read_arch(env) ? env->arch : "unknown";
}
+int perf_env__nr_cpus_avail(struct perf_env *env)
+{
+ return env && !perf_env__read_nr_cpus_avail(env) ? env->nr_cpus_avail : 0;
+}
+
void cpu_cache_level__free(struct cpu_cache_level *cache)
{
free(cache->type);
diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
index 62e193948608..1f3ccc368530 100644
--- a/tools/perf/util/env.h
+++ b/tools/perf/util/env.h
@@ -77,5 +77,6 @@ struct perf_env {
const char *perf_env__arch(struct perf_env *env);
const char *perf_env__raw_arch(struct perf_env *env);
+int perf_env__nr_cpus_avail(struct perf_env *env);
#endif /* __PERF_ENV_H */
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index e011a7160380..f62ecd9c36e8 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2305,6 +2305,11 @@ bool machine__is(struct machine *machine, const char *arch)
return machine && !strcmp(perf_env__raw_arch(machine->env), arch);
}
+int machine__nr_cpus_avail(struct machine *machine)
+{
+ return machine ? perf_env__nr_cpus_avail(machine->env) : 0;
+}
+
int machine__get_kernel_start(struct machine *machine)
{
struct map *map = machine__kernel_map(machine);
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index b31d33b5aa2a..2d2b092ba753 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -189,6 +189,7 @@ static inline bool machine__is_host(struct machine *machine)
}
bool machine__is(struct machine *machine, const char *arch);
+int machine__nr_cpus_avail(struct machine *machine);
struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid);
struct thread *machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid);
--
1.9.1
* [PATCH V3 05/17] perf tools: Workaround missing maps for x86 PTI entry trampolines
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
On x86_64, the PTI entry trampolines are not in the kernel map created by
perf tools. That results in the addresses having no symbols and prevents
annotation. It also causes Intel PT to have decoding errors at the
trampoline addresses. Work around that by creating maps for the
trampolines.
At present, the kernel does not export information revealing where the
trampolines are. Until that happens, the addresses are hardcoded.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/machine.c | 96 +++++++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/machine.h | 3 ++
tools/perf/util/symbol.c | 12 +++---
3 files changed, 106 insertions(+), 5 deletions(-)
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index f62ecd9c36e8..db695603873b 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -851,6 +851,102 @@ static int machine__get_running_kernel_start(struct machine *machine,
return 0;
}
+/* Kernel-space maps for symbols that are outside the main kernel map and module maps */
+struct extra_kernel_map {
+ u64 start;
+ u64 end;
+ u64 pgoff;
+};
+
+static int machine__create_extra_kernel_map(struct machine *machine,
+ struct dso *kernel,
+ struct extra_kernel_map *xm)
+{
+ struct kmap *kmap;
+ struct map *map;
+
+ map = map__new2(xm->start, kernel);
+ if (!map)
+ return -1;
+
+ map->end = xm->end;
+ map->pgoff = xm->pgoff;
+
+ kmap = map__kmap(map);
+
+ kmap->kmaps = &machine->kmaps;
+
+ map_groups__insert(&machine->kmaps, map);
+
+ pr_debug2("Added extra kernel map %" PRIx64 "-%" PRIx64 "\n",
+ map->start, map->end);
+
+ map__put(map);
+
+ return 0;
+}
+
+static u64 find_entry_trampoline(struct dso *dso)
+{
+ /* Duplicates are removed so lookup all aliases */
+ const char *syms[] = {
+ "_entry_trampoline",
+ "__entry_trampoline_start",
+ "entry_SYSCALL_64_trampoline",
+ };
+ struct symbol *sym = dso__first_symbol(dso);
+ unsigned int i;
+
+ for (; sym; sym = dso__next_symbol(sym)) {
+ if (sym->binding != STB_GLOBAL)
+ continue;
+ for (i = 0; i < ARRAY_SIZE(syms); i++) {
+ if (!strcmp(sym->name, syms[i]))
+ return sym->start;
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * These values can be used for kernels that do not have symbols for the entry
+ * trampolines in kallsyms.
+ */
+#define X86_64_CPU_ENTRY_AREA_PER_CPU 0xfffffe0000000000ULL
+#define X86_64_CPU_ENTRY_AREA_SIZE 0x2c000
+#define X86_64_ENTRY_TRAMPOLINE 0x6000
+
+/* Map x86_64 PTI entry trampolines */
+int machine__map_x86_64_entry_trampolines(struct machine *machine,
+ struct dso *kernel)
+{
+ u64 pgoff = find_entry_trampoline(kernel);
+ int nr_cpus_avail, cpu;
+
+ if (!pgoff)
+ return 0;
+
+ nr_cpus_avail = machine__nr_cpus_avail(machine);
+
+ /* Add a 1 page map for each CPU's entry trampoline */
+ for (cpu = 0; cpu < nr_cpus_avail; cpu++) {
+ u64 va = X86_64_CPU_ENTRY_AREA_PER_CPU +
+ cpu * X86_64_CPU_ENTRY_AREA_SIZE +
+ X86_64_ENTRY_TRAMPOLINE;
+ struct extra_kernel_map xm = {
+ .start = va,
+ .end = va + page_size,
+ .pgoff = pgoff,
+ };
+
+ if (machine__create_extra_kernel_map(machine, kernel, &xm) < 0)
+ return -1;
+ }
+
+ return 0;
+}
+
static int
__machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
{
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index 2d2b092ba753..b6a1c3eb3d65 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -268,4 +268,7 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
*/
char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, char **modp);
+int machine__map_x86_64_entry_trampolines(struct machine *machine,
+ struct dso *kernel);
+
#endif /* __PERF_MACHINE_H */
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 4a39f4d0a174..701144094183 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1490,20 +1490,22 @@ int dso__load(struct dso *dso, struct map *map)
goto out;
}
+ if (map->groups && map->groups->machine)
+ machine = map->groups->machine;
+ else
+ machine = NULL;
+
if (dso->kernel) {
if (dso->kernel == DSO_TYPE_KERNEL)
ret = dso__load_kernel_sym(dso, map);
else if (dso->kernel == DSO_TYPE_GUEST_KERNEL)
ret = dso__load_guest_kernel_sym(dso, map);
+ if (machine__is(machine, "x86_64"))
+ machine__map_x86_64_entry_trampolines(machine, dso);
goto out;
}
- if (map->groups && map->groups->machine)
- machine = map->groups->machine;
- else
- machine = NULL;
-
dso->adjust_symbols = 0;
if (perfmap) {
--
1.9.1
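The hardcoded layout fixes each CPU's trampoline page at a computable virtual address; a minimal sketch of the arithmetic, with the constants taken from the patch:

```c
#include <stdint.h>

/* Constants from the workaround patch: per-CPU cpu_entry_area base,
 * per-CPU area stride, and the trampoline's offset within the area. */
#define X86_64_CPU_ENTRY_AREA_PER_CPU 0xfffffe0000000000ULL
#define X86_64_CPU_ENTRY_AREA_SIZE    0x2c000ULL
#define X86_64_ENTRY_TRAMPOLINE       0x6000ULL

/* Virtual address of the PTI entry trampoline page for a given CPU. */
static uint64_t trampoline_vaddr(int cpu)
{
	return X86_64_CPU_ENTRY_AREA_PER_CPU +
	       (uint64_t)cpu * X86_64_CPU_ENTRY_AREA_SIZE +
	       X86_64_ENTRY_TRAMPOLINE;
}
```

For CPU 0 this gives 0xfffffe0000006000, the same address gdb disassembled in the kcore patch earlier in the series.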
* [PATCH V3 06/17] perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
When kernel symbols are derived from /proc/kallsyms only (not using vmlinux
or /proc/kcore), map_groups__split_kallsyms() is used. However, that
function makes assumptions that are not true of entry trampoline symbols.
For now, remove the entry trampoline symbols at that point, since they are
no longer needed by then.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/map.h | 8 ++++++++
tools/perf/util/symbol.c | 13 +++++++++++++
2 files changed, 21 insertions(+)
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index f1afe1ab6ff7..fafcc375ed37 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -8,6 +8,7 @@
#include <linux/rbtree.h>
#include <pthread.h>
#include <stdio.h>
+#include <string.h>
#include <stdbool.h>
#include <linux/types.h>
#include "rwsem.h"
@@ -239,4 +240,11 @@ static inline bool __map__is_kmodule(const struct map *map)
bool map__has_symbols(const struct map *map);
+#define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"
+
+static inline bool is_entry_trampoline(const char *name)
+{
+ return !strcmp(name, ENTRY_TRAMPOLINE_NAME);
+}
+
#endif /* __PERF_MAP_H */
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 701144094183..929058da6727 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -737,12 +737,15 @@ static int map_groups__split_kallsyms(struct map_groups *kmaps, struct dso *dso,
struct rb_root *root = &dso->symbols;
struct rb_node *next = rb_first(root);
int kernel_range = 0;
+ bool x86_64;
if (!kmaps)
return -1;
machine = kmaps->machine;
+ x86_64 = machine__is(machine, "x86_64");
+
while (next) {
char *module;
@@ -790,6 +793,16 @@ static int map_groups__split_kallsyms(struct map_groups *kmaps, struct dso *dso,
*/
pos->start = curr_map->map_ip(curr_map, pos->start);
pos->end = curr_map->map_ip(curr_map, pos->end);
+ } else if (x86_64 && is_entry_trampoline(pos->name)) {
+ /*
+ * These symbols are not needed anymore since the
+ * trampoline maps refer to the text section and its
+ * symbols instead. Avoid having to deal with
+ * relocations, and the assumption that the first symbol
+ * is the start of kernel text, by simply removing the
+ * symbols at this point.
+ */
+ goto discard_symbol;
} else if (curr_map != initial_map) {
char dso_name[PATH_MAX];
struct dso *ndso;
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 07/17] perf tools: Allow for extra kernel maps
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (5 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 06/17] perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:39 ` [tip:perf/core] perf machine: " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 08/17] perf tools: Create maps for x86 PTI entry trampolines Adrian Hunter
` (11 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
Identify extra kernel maps by name so that they can be distinguished from
the kernel map and module maps.
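The classification rule the patch introduces is: an "extra" kernel map is one whose kmap carries a non-empty name, and a module map is anything that is neither the kernel map nor an extra kernel map. A simplified sketch with hypothetical types (the real code works on struct map/struct kmap):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for struct kmap: a name and a kernel-map flag. */
struct kmap_sketch {
	char name[256];		/* KMAP_NAME_LEN in the patch */
	bool is_kernel;
};

/* Mirrors __map__is_extra_kernel_map(): non-empty name => extra map. */
static bool is_extra_kernel_map(const struct kmap_sketch *k)
{
	return k->name[0] != '\0';
}

/* Mirrors the updated __map__is_kmodule(): neither kernel nor extra. */
static bool is_kmodule(const struct kmap_sketch *k)
{
	return !k->is_kernel && !is_extra_kernel_map(k);
}
```

This is why perf_event__synthesize_modules() switches from `__map__is_kernel()` to `!__map__is_kmodule()`: the old test would have treated trampoline maps as modules.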
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/event.c | 2 +-
tools/perf/util/machine.c | 8 ++++++--
tools/perf/util/map.c | 22 ++++++++++++++++++----
tools/perf/util/map.h | 7 ++++++-
tools/perf/util/symbol.c | 7 +++----
5 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 244135b5ea43..aafa9878465f 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -487,7 +487,7 @@ int perf_event__synthesize_modules(struct perf_tool *tool,
for (pos = maps__first(maps); pos; pos = map__next(pos)) {
size_t size;
- if (__map__is_kernel(pos))
+ if (!__map__is_kmodule(pos))
continue;
size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index db695603873b..355d23bcd443 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -856,6 +856,7 @@ struct extra_kernel_map {
u64 start;
u64 end;
u64 pgoff;
+ char name[KMAP_NAME_LEN];
};
static int machine__create_extra_kernel_map(struct machine *machine,
@@ -875,11 +876,12 @@ static int machine__create_extra_kernel_map(struct machine *machine,
kmap = map__kmap(map);
kmap->kmaps = &machine->kmaps;
+ strlcpy(kmap->name, xm->name, KMAP_NAME_LEN);
map_groups__insert(&machine->kmaps, map);
- pr_debug2("Added extra kernel map %" PRIx64 "-%" PRIx64 "\n",
- map->start, map->end);
+ pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
+ kmap->name, map->start, map->end);
map__put(map);
@@ -940,6 +942,8 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
.pgoff = pgoff,
};
+ strlcpy(xm.name, ENTRY_TRAMPOLINE_NAME, KMAP_NAME_LEN);
+
if (machine__create_extra_kernel_map(machine, kernel, &xm) < 0)
return -1;
}
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index c8fe836e4c3c..6ae97eda370b 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -252,6 +252,13 @@ bool __map__is_kernel(const struct map *map)
return machine__kernel_map(map->groups->machine) == map;
}
+bool __map__is_extra_kernel_map(const struct map *map)
+{
+ struct kmap *kmap = __map__kmap((struct map *)map);
+
+ return kmap && kmap->name[0];
+}
+
bool map__has_symbols(const struct map *map)
{
return dso__has_symbols(map->dso);
@@ -846,15 +853,22 @@ struct map *map__next(struct map *map)
return NULL;
}
-struct kmap *map__kmap(struct map *map)
+struct kmap *__map__kmap(struct map *map)
{
- if (!map->dso || !map->dso->kernel) {
- pr_err("Internal error: map__kmap with a non-kernel map\n");
+ if (!map->dso || !map->dso->kernel)
return NULL;
- }
return (struct kmap *)(map + 1);
}
+struct kmap *map__kmap(struct map *map)
+{
+ struct kmap *kmap = __map__kmap(map);
+
+ if (!kmap)
+ pr_err("Internal error: map__kmap with a non-kernel map\n");
+ return kmap;
+}
+
struct map_groups *map__kmaps(struct map *map)
{
struct kmap *kmap = map__kmap(map);
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index fafcc375ed37..97e2a063bd65 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -47,9 +47,12 @@ struct map {
refcount_t refcnt;
};
+#define KMAP_NAME_LEN 256
+
struct kmap {
struct ref_reloc_sym *ref_reloc_sym;
struct map_groups *kmaps;
+ char name[KMAP_NAME_LEN];
};
struct maps {
@@ -76,6 +79,7 @@ static inline struct map_groups *map_groups__get(struct map_groups *mg)
void map_groups__put(struct map_groups *mg);
+struct kmap *__map__kmap(struct map *map);
struct kmap *map__kmap(struct map *map);
struct map_groups *map__kmaps(struct map *map);
@@ -232,10 +236,11 @@ int map_groups__fixup_overlappings(struct map_groups *mg, struct map *map,
struct map *map_groups__find_by_name(struct map_groups *mg, const char *name);
bool __map__is_kernel(const struct map *map);
+bool __map__is_extra_kernel_map(const struct map *map);
static inline bool __map__is_kmodule(const struct map *map)
{
- return !__map__is_kernel(map);
+ return !__map__is_kernel(map) && !__map__is_extra_kernel_map(map);
}
bool map__has_symbols(const struct map *map);
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 929058da6727..cdddae67f40c 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1030,7 +1030,7 @@ struct map *map_groups__first(struct map_groups *mg)
return maps__first(&mg->maps);
}
-static int do_validate_kcore_modules(const char *filename, struct map *map,
+static int do_validate_kcore_modules(const char *filename,
struct map_groups *kmaps)
{
struct rb_root modules = RB_ROOT;
@@ -1046,8 +1046,7 @@ static int do_validate_kcore_modules(const char *filename, struct map *map,
struct map *next = map_groups__next(old_map);
struct module_info *mi;
- if (old_map == map || old_map->start == map->start) {
- /* The kernel map */
+ if (!__map__is_kmodule(old_map)) {
old_map = next;
continue;
}
@@ -1104,7 +1103,7 @@ static int validate_kcore_modules(const char *kallsyms_filename,
kallsyms_filename))
return -EINVAL;
- if (do_validate_kcore_modules(modules_filename, map, kmaps))
+ if (do_validate_kcore_modules(modules_filename, kmaps))
return -EINVAL;
return 0;
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 08/17] perf tools: Create maps for x86 PTI entry trampolines
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (6 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 07/17] perf tools: Allow for extra kernel maps Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:40 ` [tip:perf/core] perf machine: " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 09/17] perf tools: Synthesize and process mmap events " Adrian Hunter
` (10 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
Create maps for x86 PTI entry trampolines, based on symbols found in
kallsyms. It is also necessary to keep track of whether the trampolines
have been mapped, particularly when the kernel dso is kcore.
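While scanning kallsyms, add_extra_kernel_map() in this patch grows its array of maps with a doubling strategy (capacity starts at 32 and doubles when exhausted). A stand-alone sketch of that growth pattern, with a stand-in element type:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the extra_kernel_map_info array growth in the patch:
 * capacity starts at 32 and doubles via realloc() when exhausted. */
struct xmap_list {
	int cnt;
	int max_cnt;
	unsigned long long *starts;	/* stand-in for struct extra_kernel_map */
};

static int xmap_list__add(struct xmap_list *l, unsigned long long start)
{
	if (l->cnt >= l->max_cnt) {
		int new_max = l->max_cnt ? l->max_cnt * 2 : 32;
		void *buf = realloc(l->starts, new_max * sizeof(*l->starts));

		if (!buf)
			return -1;
		l->starts = buf;
		l->max_cnt = new_max;
	}
	l->starts[l->cnt++] = start;
	return 0;
}
```

One entry is collected per per-CPU trampoline symbol, and the shared `_entry_trampoline` address found along the way becomes each entry's pgoff.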
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/arch/x86/util/Build | 1 +
tools/perf/arch/x86/util/machine.c | 103 +++++++++++++++++++++++++++++++++++++
tools/perf/util/machine.c | 66 +++++++++++++++++-------
tools/perf/util/machine.h | 19 +++++++
tools/perf/util/symbol.c | 17 ++++++
5 files changed, 187 insertions(+), 19 deletions(-)
create mode 100644 tools/perf/arch/x86/util/machine.c
diff --git a/tools/perf/arch/x86/util/Build b/tools/perf/arch/x86/util/Build
index f95e6f46ef0d..aa1ce5f6cc00 100644
--- a/tools/perf/arch/x86/util/Build
+++ b/tools/perf/arch/x86/util/Build
@@ -4,6 +4,7 @@ libperf-y += pmu.o
libperf-y += kvm-stat.o
libperf-y += perf_regs.o
libperf-y += group.o
+libperf-y += machine.o
libperf-$(CONFIG_DWARF) += dwarf-regs.o
libperf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
diff --git a/tools/perf/arch/x86/util/machine.c b/tools/perf/arch/x86/util/machine.c
new file mode 100644
index 000000000000..50dd6d0426a9
--- /dev/null
+++ b/tools/perf/arch/x86/util/machine.c
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/types.h>
+#include <linux/string.h>
+#include <stdlib.h>
+
+#include "../../util/machine.h"
+#include "../../util/map.h"
+#include "../../util/symbol.h"
+#include "../../util/sane_ctype.h"
+
+#include <symbol/kallsyms.h>
+
+#if defined(__x86_64__)
+
+struct extra_kernel_map_info {
+ int cnt;
+ int max_cnt;
+ struct extra_kernel_map *maps;
+ bool get_entry_trampolines;
+ u64 entry_trampoline;
+};
+
+static int add_extra_kernel_map(struct extra_kernel_map_info *mi, u64 start,
+ u64 end, u64 pgoff, const char *name)
+{
+ if (mi->cnt >= mi->max_cnt) {
+ void *buf;
+ size_t sz;
+
+ mi->max_cnt = mi->max_cnt ? mi->max_cnt * 2 : 32;
+ sz = sizeof(struct extra_kernel_map) * mi->max_cnt;
+ buf = realloc(mi->maps, sz);
+ if (!buf)
+ return -1;
+ mi->maps = buf;
+ }
+
+ mi->maps[mi->cnt].start = start;
+ mi->maps[mi->cnt].end = end;
+ mi->maps[mi->cnt].pgoff = pgoff;
+ strlcpy(mi->maps[mi->cnt].name, name, KMAP_NAME_LEN);
+
+ mi->cnt += 1;
+
+ return 0;
+}
+
+static int find_extra_kernel_maps(void *arg, const char *name, char type,
+ u64 start)
+{
+ struct extra_kernel_map_info *mi = arg;
+
+ if (!mi->entry_trampoline && kallsyms2elf_binding(type) == STB_GLOBAL &&
+ !strcmp(name, "_entry_trampoline")) {
+ mi->entry_trampoline = start;
+ return 0;
+ }
+
+ if (is_entry_trampoline(name)) {
+ u64 end = start + page_size;
+
+ return add_extra_kernel_map(mi, start, end, 0, name);
+ }
+
+ return 0;
+}
+
+int machine__create_extra_kernel_maps(struct machine *machine,
+ struct dso *kernel)
+{
+ struct extra_kernel_map_info mi = {0};
+ char filename[PATH_MAX];
+ int ret;
+ int i;
+
+ machine__get_kallsyms_filename(machine, filename, PATH_MAX);
+
+ if (symbol__restricted_filename(filename, "/proc/kallsyms"))
+ return 0;
+
+ ret = kallsyms__parse(filename, &mi, find_extra_kernel_maps);
+ if (ret)
+ goto out_free;
+
+ if (!mi.entry_trampoline)
+ goto out_free;
+
+ for (i = 0; i < mi.cnt; i++) {
+ struct extra_kernel_map *xm = &mi.maps[i];
+
+ xm->pgoff = mi.entry_trampoline;
+ ret = machine__create_extra_kernel_map(machine, kernel, xm);
+ if (ret)
+ goto out_free;
+ }
+
+ machine->trampolines_mapped = mi.cnt;
+out_free:
+ free(mi.maps);
+ return ret;
+}
+
+#endif
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 355d23bcd443..dd7ab0731167 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -807,8 +807,8 @@ struct process_args {
u64 start;
};
-static void machine__get_kallsyms_filename(struct machine *machine, char *buf,
- size_t bufsz)
+void machine__get_kallsyms_filename(struct machine *machine, char *buf,
+ size_t bufsz)
{
if (machine__is_default_guest(machine))
scnprintf(buf, bufsz, "%s", symbol_conf.default_guest_kallsyms);
@@ -851,17 +851,9 @@ static int machine__get_running_kernel_start(struct machine *machine,
return 0;
}
-/* Kernel-space maps for symbols that are outside the main kernel map and module maps */
-struct extra_kernel_map {
- u64 start;
- u64 end;
- u64 pgoff;
- char name[KMAP_NAME_LEN];
-};
-
-static int machine__create_extra_kernel_map(struct machine *machine,
- struct dso *kernel,
- struct extra_kernel_map *xm)
+int machine__create_extra_kernel_map(struct machine *machine,
+ struct dso *kernel,
+ struct extra_kernel_map *xm)
{
struct kmap *kmap;
struct map *map;
@@ -923,9 +915,33 @@ static u64 find_entry_trampoline(struct dso *dso)
int machine__map_x86_64_entry_trampolines(struct machine *machine,
struct dso *kernel)
{
- u64 pgoff = find_entry_trampoline(kernel);
+ struct map_groups *kmaps = &machine->kmaps;
+ struct maps *maps = &kmaps->maps;
int nr_cpus_avail, cpu;
+ bool found = false;
+ struct map *map;
+ u64 pgoff;
+
+ /*
+ * In the vmlinux case, pgoff is a virtual address which must now be
+ * mapped to a vmlinux offset.
+ */
+ for (map = maps__first(maps); map; map = map__next(map)) {
+ struct kmap *kmap = __map__kmap(map);
+ struct map *dest_map;
+
+ if (!kmap || !is_entry_trampoline(kmap->name))
+ continue;
+
+ dest_map = map_groups__find(kmaps, map->pgoff);
+ if (dest_map != map)
+ map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
+ found = true;
+ }
+ if (found || machine->trampolines_mapped)
+ return 0;
+ pgoff = find_entry_trampoline(kernel);
if (!pgoff)
return 0;
@@ -948,6 +964,14 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
return -1;
}
+ machine->trampolines_mapped = nr_cpus_avail;
+
+ return 0;
+}
+
+int __weak machine__create_extra_kernel_maps(struct machine *machine __maybe_unused,
+ struct dso *kernel __maybe_unused)
+{
return 0;
}
@@ -1306,9 +1330,8 @@ int machine__create_kernel_maps(struct machine *machine)
return -1;
ret = __machine__create_kernel_maps(machine, kernel);
- dso__put(kernel);
if (ret < 0)
- return -1;
+ goto out_put;
if (symbol_conf.use_modules && machine__create_modules(machine) < 0) {
if (machine__is_host(machine))
@@ -1323,7 +1346,8 @@ int machine__create_kernel_maps(struct machine *machine)
if (name &&
map__set_kallsyms_ref_reloc_sym(machine->vmlinux_map, name, addr)) {
machine__destroy_kernel_maps(machine);
- return -1;
+ ret = -1;
+ goto out_put;
}
/* we have a real start address now, so re-order the kmaps */
@@ -1339,12 +1363,16 @@ int machine__create_kernel_maps(struct machine *machine)
map__put(map);
}
+ if (machine__create_extra_kernel_maps(machine, kernel))
+ pr_debug("Problems creating extra kernel maps, continuing anyway...\n");
+
/* update end address of the kernel map using adjacent module address */
map = map__next(machine__kernel_map(machine));
if (map)
machine__set_kernel_mmap(machine, addr, map->start);
-
- return 0;
+out_put:
+ dso__put(kernel);
+ return ret;
}
static bool machine__uses_kcore(struct machine *machine)
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index b6a1c3eb3d65..1de7660d93e9 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -56,6 +56,7 @@ struct machine {
void *priv;
u64 db_id;
};
+ bool trampolines_mapped;
};
static inline struct threads *machine__threads(struct machine *machine, pid_t tid)
@@ -268,6 +269,24 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
*/
char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, char **modp);
+void machine__get_kallsyms_filename(struct machine *machine, char *buf,
+ size_t bufsz);
+
+int machine__create_extra_kernel_maps(struct machine *machine,
+ struct dso *kernel);
+
+/* Kernel-space maps for symbols that are outside the main kernel map and module maps */
+struct extra_kernel_map {
+ u64 start;
+ u64 end;
+ u64 pgoff;
+ char name[KMAP_NAME_LEN];
+};
+
+int machine__create_extra_kernel_map(struct machine *machine,
+ struct dso *kernel,
+ struct extra_kernel_map *xm);
+
int machine__map_x86_64_entry_trampolines(struct machine *machine,
struct dso *kernel);
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index cdddae67f40c..8c84437f2a10 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1158,6 +1158,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
struct map_groups *kmaps = map__kmaps(map);
struct kcore_mapfn_data md;
struct map *old_map, *new_map, *replacement_map = NULL;
+ struct machine *machine;
bool is_64_bit;
int err, fd;
char kcore_filename[PATH_MAX];
@@ -1166,6 +1167,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
if (!kmaps)
return -EINVAL;
+ machine = kmaps->machine;
+
/* This function requires that the map is the kernel map */
if (!__map__is_kernel(map))
return -EINVAL;
@@ -1209,6 +1212,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
map_groups__remove(kmaps, old_map);
old_map = next;
}
+ machine->trampolines_mapped = false;
/* Find the kernel map using the '_stext' symbol */
if (!kallsyms__get_function_start(kallsyms_filename, "_stext", &stext)) {
@@ -1245,6 +1249,19 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
map__put(new_map);
}
+ if (machine__is(machine, "x86_64")) {
+ u64 addr;
+
+ /*
+ * If one of the corresponding symbols is there, assume the
+ * entry trampoline maps are too.
+ */
+ if (!kallsyms__get_function_start(kallsyms_filename,
+ ENTRY_TRAMPOLINE_NAME,
+ &addr))
+ machine->trampolines_mapped = true;
+ }
+
/*
* Set the data type and long name so that kcore can be read via
* dso__data_read_addr().
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 09/17] perf tools: Synthesize and process mmap events for x86 PTI entry trampolines
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (7 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 08/17] perf tools: Create maps for x86 PTI entry trampolines Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:40 ` [tip:perf/core] perf machine: " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 10/17] perf buildid-cache: kcore_copy: Keep phdr data in a list Adrian Hunter
` (9 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
Like the kernel text, the location of the x86 PTI entry trampolines must be
recorded in the perf.data file. As with the kernel map, synthesize an mmap
event for them, and add processing for it.
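The synthesized event is sized so the filename field holds just the map name, rounded up to a u64 multiple (PERF_ALIGN in perf's sources), plus the id header. A sketch of that arithmetic with illustrative struct sizes (the real values come from `sizeof(event->mmap)` etc.):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* PERF_ALIGN as defined in perf: round x up to a multiple of a
 * (a must be a power of two). */
#define PERF_ALIGN(x, a)	(((x) + (a) - 1) & ~((uint64_t)(a) - 1))

/* Event size = full mmap record minus the fixed filename buffer,
 * plus the aligned name length, plus the sample-id header size. */
static size_t synth_mmap_size(size_t ev_mmap_sz, size_t filename_sz,
			      const char *name, size_t id_hdr_size)
{
	return ev_mmap_sz - filename_sz +
	       PERF_ALIGN(strlen(name) + 1, sizeof(uint64_t)) + id_hdr_size;
}
```

So for "__entry_SYSCALL_64_trampoline" (29 chars + NUL = 30 bytes), the name portion occupies 32 bytes in the record.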
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/arch/x86/util/Build | 1 +
tools/perf/arch/x86/util/event.c | 76 ++++++++++++++++++++++++++++++++++++++++
tools/perf/util/event.c | 34 ++++++++++++++----
tools/perf/util/event.h | 8 +++++
tools/perf/util/machine.c | 28 +++++++++++++++
5 files changed, 140 insertions(+), 7 deletions(-)
create mode 100644 tools/perf/arch/x86/util/event.c
diff --git a/tools/perf/arch/x86/util/Build b/tools/perf/arch/x86/util/Build
index aa1ce5f6cc00..844b8f335532 100644
--- a/tools/perf/arch/x86/util/Build
+++ b/tools/perf/arch/x86/util/Build
@@ -5,6 +5,7 @@ libperf-y += kvm-stat.o
libperf-y += perf_regs.o
libperf-y += group.o
libperf-y += machine.o
+libperf-y += event.o
libperf-$(CONFIG_DWARF) += dwarf-regs.o
libperf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
new file mode 100644
index 000000000000..675a0213044d
--- /dev/null
+++ b/tools/perf/arch/x86/util/event.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/types.h>
+#include <linux/string.h>
+
+#include "../../util/machine.h"
+#include "../../util/tool.h"
+#include "../../util/map.h"
+#include "../../util/util.h"
+#include "../../util/debug.h"
+
+#if defined(__x86_64__)
+
+int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
+ perf_event__handler_t process,
+ struct machine *machine)
+{
+ int rc = 0;
+ struct map *pos;
+ struct map_groups *kmaps = &machine->kmaps;
+ struct maps *maps = &kmaps->maps;
+ union perf_event *event = zalloc(sizeof(event->mmap) +
+ machine->id_hdr_size);
+
+ if (!event) {
+ pr_debug("Not enough memory synthesizing mmap event "
+ "for extra kernel maps\n");
+ return -1;
+ }
+
+ for (pos = maps__first(maps); pos; pos = map__next(pos)) {
+ struct kmap *kmap;
+ size_t size;
+
+ if (!__map__is_extra_kernel_map(pos))
+ continue;
+
+ kmap = map__kmap(pos);
+
+ size = sizeof(event->mmap) - sizeof(event->mmap.filename) +
+ PERF_ALIGN(strlen(kmap->name) + 1, sizeof(u64)) +
+ machine->id_hdr_size;
+
+ memset(event, 0, size);
+
+ event->mmap.header.type = PERF_RECORD_MMAP;
+
+ /*
+ * kernel uses 0 for user space maps, see kernel/perf_event.c
+ * __perf_event_mmap
+ */
+ if (machine__is_host(machine))
+ event->header.misc = PERF_RECORD_MISC_KERNEL;
+ else
+ event->header.misc = PERF_RECORD_MISC_GUEST_KERNEL;
+
+ event->mmap.header.size = size;
+
+ event->mmap.start = pos->start;
+ event->mmap.len = pos->end - pos->start;
+ event->mmap.pgoff = pos->pgoff;
+ event->mmap.pid = machine->pid;
+
+ strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
+
+ if (perf_tool__process_synth_event(tool, event, machine,
+ process) != 0) {
+ rc = -1;
+ break;
+ }
+ }
+
+ free(event);
+ return rc;
+}
+
+#endif
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index aafa9878465f..0c8ecf0c78a4 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -88,10 +88,10 @@ static const char *perf_ns__name(unsigned int id)
return perf_ns__names[id];
}
-static int perf_tool__process_synth_event(struct perf_tool *tool,
- union perf_event *event,
- struct machine *machine,
- perf_event__handler_t process)
+int perf_tool__process_synth_event(struct perf_tool *tool,
+ union perf_event *event,
+ struct machine *machine,
+ perf_event__handler_t process)
{
struct perf_sample synth_sample = {
.pid = -1,
@@ -888,9 +888,16 @@ int kallsyms__get_function_start(const char *kallsyms_filename,
return 0;
}
-int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
- perf_event__handler_t process,
- struct machine *machine)
+int __weak perf_event__synthesize_extra_kmaps(struct perf_tool *tool __maybe_unused,
+ perf_event__handler_t process __maybe_unused,
+ struct machine *machine __maybe_unused)
+{
+ return 0;
+}
+
+static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
+ perf_event__handler_t process,
+ struct machine *machine)
{
size_t size;
struct map *map = machine__kernel_map(machine);
@@ -943,6 +950,19 @@ int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
return err;
}
+int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
+ perf_event__handler_t process,
+ struct machine *machine)
+{
+ int err;
+
+ err = __perf_event__synthesize_kernel_mmap(tool, process, machine);
+ if (err < 0)
+ return err;
+
+ return perf_event__synthesize_extra_kmaps(tool, process, machine);
+}
+
int perf_event__synthesize_thread_map2(struct perf_tool *tool,
struct thread_map *threads,
perf_event__handler_t process,
diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
index 0f794744919c..bfa60bcafbde 100644
--- a/tools/perf/util/event.h
+++ b/tools/perf/util/event.h
@@ -750,6 +750,10 @@ int perf_event__process_exit(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine);
+int perf_tool__process_synth_event(struct perf_tool *tool,
+ union perf_event *event,
+ struct machine *machine,
+ perf_event__handler_t process);
int perf_event__process(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
@@ -796,6 +800,10 @@ int perf_event__synthesize_mmap_events(struct perf_tool *tool,
bool mmap_data,
unsigned int proc_map_timeout);
+int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
+ perf_event__handler_t process,
+ struct machine *machine);
+
size_t perf_event__fprintf_comm(union perf_event *event, FILE *fp);
size_t perf_event__fprintf_mmap(union perf_event *event, FILE *fp);
size_t perf_event__fprintf_mmap2(union perf_event *event, FILE *fp);
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index dd7ab0731167..e7b4a8b513f2 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -1387,6 +1387,32 @@ static bool machine__uses_kcore(struct machine *machine)
return false;
}
+static bool perf_event__is_extra_kernel_mmap(struct machine *machine,
+ union perf_event *event)
+{
+ return machine__is(machine, "x86_64") &&
+ is_entry_trampoline(event->mmap.filename);
+}
+
+static int machine__process_extra_kernel_map(struct machine *machine,
+ union perf_event *event)
+{
+ struct map *kernel_map = machine__kernel_map(machine);
+ struct dso *kernel = kernel_map ? kernel_map->dso : NULL;
+ struct extra_kernel_map xm = {
+ .start = event->mmap.start,
+ .end = event->mmap.start + event->mmap.len,
+ .pgoff = event->mmap.pgoff,
+ };
+
+ if (kernel == NULL)
+ return -1;
+
+ strlcpy(xm.name, event->mmap.filename, KMAP_NAME_LEN);
+
+ return machine__create_extra_kernel_map(machine, kernel, &xm);
+}
+
static int machine__process_kernel_mmap_event(struct machine *machine,
union perf_event *event)
{
@@ -1490,6 +1516,8 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
*/
dso__load(kernel, machine__kernel_map(machine));
}
+ } else if (perf_event__is_extra_kernel_mmap(machine, event)) {
+ return machine__process_extra_kernel_map(machine, event);
}
return 0;
out_problem:
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 10/17] perf buildid-cache: kcore_copy: Keep phdr data in a list
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (8 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 09/17] perf tools: Synthesize and process mmap events " Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:41 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 11/17] perf buildid-cache: kcore_copy: Keep a count of phdrs Adrian Hunter
` (8 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
Currently, kcore_copy makes 2 program headers, one for the kernel text
(namely kernel_map) and one for the modules (namely modules_map). Now more
program headers are needed, but treating each program header as a special
case would result in much more code.
Instead, in preparation for adding more program headers, change to keep
program header data (phdr_data) in a list.
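The patch embeds a kernel-style list_head node in each phdr_data and appends the non-empty maps to kci->phdrs. A minimal self-contained sketch of that structure (the list implementation here is a stripped-down stand-in for the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal doubly linked list in the style of the kernel's list_head. */
struct list_head {
	struct list_head *next, *prev;
};

static void list_init(struct list_head *h)
{
	h->next = h->prev = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

/* Program header data with an embedded list node, as in the patch. */
struct phdr_data {
	unsigned long long addr, len;
	struct list_head node;
};

static int count_phdrs(const struct list_head *h)
{
	int n = 0;

	for (struct list_head *p = h->next; p != h; p = p->next)
		n++;
	return n;
}
```

As in kcore_copy__read_maps(), only maps with a non-zero len get added, so an empty modules map contributes no list entry.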
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/symbol-elf.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 48943b834f11..b13873a6f368 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1388,6 +1388,7 @@ struct phdr_data {
off_t offset;
u64 addr;
u64 len;
+ struct list_head node;
};
struct kcore_copy_info {
@@ -1399,6 +1400,7 @@ struct kcore_copy_info {
u64 last_module_symbol;
struct phdr_data kernel_map;
struct phdr_data modules_map;
+ struct list_head phdrs;
};
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
@@ -1510,6 +1512,11 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
if (elf_read_maps(elf, true, kcore_copy__read_map, kci) < 0)
return -1;
+ if (kci->kernel_map.len)
+ list_add_tail(&kci->kernel_map.node, &kci->phdrs);
+ if (kci->modules_map.len)
+ list_add_tail(&kci->modules_map.node, &kci->phdrs);
+
return 0;
}
@@ -1678,6 +1685,8 @@ int kcore_copy(const char *from_dir, const char *to_dir)
char kcore_filename[PATH_MAX];
char extract_filename[PATH_MAX];
+ INIT_LIST_HEAD(&kci.phdrs);
+
if (kcore_copy__copy_file(from_dir, to_dir, "kallsyms"))
return -1;
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 11/17] perf buildid-cache: kcore_copy: Keep a count of phdrs
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (9 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 10/17] perf buildid-cache: kcore_copy: Keep phdr data in a list Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:42 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 12/17] perf buildid-cache: kcore_copy: Calculate offset from phnum Adrian Hunter
` (7 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
In preparation to add more program headers, keep a count of phdrs.
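The count the patch records collapses each map length to 0 or 1 with the `!!` idiom, replacing the old "start at 2, subtract for a missing modules map" logic. As a tiny sketch:

```c
#include <assert.h>
#include <stddef.h>

/* One program header per non-empty map, as in the patch:
 * kci->phnum = !!kci->kernel_map.len + !!kci->modules_map.len. */
static size_t kcore_phnum(unsigned long long kernel_len,
			  unsigned long long modules_len)
{
	return (size_t)!!kernel_len + !!modules_len;
}
```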
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/symbol-elf.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index b13873a6f368..4e7b71e8ac0e 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1398,6 +1398,7 @@ struct kcore_copy_info {
u64 last_symbol;
u64 first_module;
u64 last_module_symbol;
+ size_t phnum;
struct phdr_data kernel_map;
struct phdr_data modules_map;
struct list_head phdrs;
@@ -1517,6 +1518,8 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
if (kci->modules_map.len)
list_add_tail(&kci->modules_map.node, &kci->phdrs);
+ kci->phnum = !!kci->kernel_map.len + !!kci->modules_map.len;
+
return 0;
}
@@ -1678,7 +1681,6 @@ int kcore_copy(const char *from_dir, const char *to_dir)
{
struct kcore kcore;
struct kcore extract;
- size_t count = 2;
int idx = 0, err = -1;
off_t offset = page_size, sz, modules_offset = 0;
struct kcore_copy_info kci = { .stext = 0, };
@@ -1705,10 +1707,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
if (kcore__init(&extract, extract_filename, kcore.elfclass, false))
goto out_kcore_close;
- if (!kci.modules_map.addr)
- count -= 1;
-
- if (kcore__copy_hdr(&kcore, &extract, count))
+ if (kcore__copy_hdr(&kcore, &extract, kci.phnum))
goto out_extract_close;
if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 12/17] perf buildid-cache: kcore_copy: Calculate offset from phnum
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (10 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 11/17] perf buildid-cache: kcore_copy: Keep a count of phdrs Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:42 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 13/17] perf buildid-cache: kcore_copy: Layout sections Adrian Hunter
` (6 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
In preparation to add more program headers, calculate the offset from the
number of phdrs.
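The new offset is the ELF header plus phnum program headers, rounded up to the page size, replacing the fixed `offset = page_size`. A sketch with the usual ELFCLASS64 sizes substituted for what gelf_fsize() would return (treat the 64/56-byte constants as assumptions for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Typical ELFCLASS64 sizes; in the patch these come from
 * gelf_fsize(elf, ELF_T_EHDR/ELF_T_PHDR, ..., EV_CURRENT). */
#define EHDR64_SZ	64ULL
#define PHDR64_SZ	56ULL

static unsigned long long round_up_to(unsigned long long x,
				      unsigned long long a)
{
	return (x + a - 1) / a * a;
}

/* Data starts after the ELF header and program header table,
 * rounded up to the page size. */
static unsigned long long kcore_data_offset(size_t phnum,
					    unsigned long long page_size)
{
	return round_up_to(EHDR64_SZ + phnum * PHDR64_SZ, page_size);
}
```

With the previous two-phdr layout this still yields page_size, so the change is behavior-preserving until more phdrs are added.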
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/symbol-elf.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 4e7b71e8ac0e..4aec12102e19 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1682,7 +1682,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
struct kcore kcore;
struct kcore extract;
int idx = 0, err = -1;
- off_t offset = page_size, sz, modules_offset = 0;
+ off_t offset, sz, modules_offset = 0;
struct kcore_copy_info kci = { .stext = 0, };
char kcore_filename[PATH_MAX];
char extract_filename[PATH_MAX];
@@ -1710,6 +1710,10 @@ int kcore_copy(const char *from_dir, const char *to_dir)
if (kcore__copy_hdr(&kcore, &extract, kci.phnum))
goto out_extract_close;
+ offset = gelf_fsize(extract.elf, ELF_T_EHDR, 1, EV_CURRENT) +
+ gelf_fsize(extract.elf, ELF_T_PHDR, kci.phnum, EV_CURRENT);
+ offset = round_up(offset, page_size);
+
if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
kci.kernel_map.len))
goto out_extract_close;
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 13/17] perf buildid-cache: kcore_copy: Layout sections
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (11 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 12/17] perf buildid-cache: kcore_copy: Calculate offset from phnum Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:43 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 14/17] perf buildid-cache: kcore_copy: Iterate phdrs Adrian Hunter
` (5 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
In preparation for adding more program headers, lay out the relative offset
of each section.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/symbol-elf.c | 25 ++++++++++++++++++++++---
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 4aec12102e19..3e76a0efd15c 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1386,6 +1386,7 @@ static off_t kcore__write(struct kcore *kcore)
struct phdr_data {
off_t offset;
+ off_t rel;
u64 addr;
u64 len;
struct list_head node;
@@ -1404,6 +1405,9 @@ struct kcore_copy_info {
struct list_head phdrs;
};
+#define kcore_copy__for_each_phdr(k, p) \
+ list_for_each_entry((p), &(k)->phdrs, node)
+
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
u64 start)
{
@@ -1518,11 +1522,21 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
if (kci->modules_map.len)
list_add_tail(&kci->modules_map.node, &kci->phdrs);
- kci->phnum = !!kci->kernel_map.len + !!kci->modules_map.len;
-
return 0;
}
+static void kcore_copy__layout(struct kcore_copy_info *kci)
+{
+ struct phdr_data *p;
+ off_t rel = 0;
+
+ kcore_copy__for_each_phdr(kci, p) {
+ p->rel = rel;
+ rel += p->len;
+ kci->phnum += 1;
+ }
+}
+
static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
Elf *elf)
{
@@ -1558,7 +1572,12 @@ static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
if (kci->first_module && !kci->last_module_symbol)
return -1;
- return kcore_copy__read_maps(kci, elf);
+ if (kcore_copy__read_maps(kci, elf))
+ return -1;
+
+ kcore_copy__layout(kci);
+
+ return 0;
}
static int kcore_copy__copy_file(const char *from_dir, const char *to_dir,
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 14/17] perf buildid-cache: kcore_copy: Iterate phdrs
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (12 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 13/17] perf buildid-cache: kcore_copy: Layout sections Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:43 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 15/17] perf buildid-cache: kcore_copy: Get rid of kernel_map Adrian Hunter
` (4 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
In preparation for adding more program headers, iterate over the phdrs
instead of assuming there is only one for the kernel text and one for the
modules.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/symbol-elf.c | 25 ++++++++++---------------
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 3e76a0efd15c..91b8cfb045ec 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1701,10 +1701,11 @@ int kcore_copy(const char *from_dir, const char *to_dir)
struct kcore kcore;
struct kcore extract;
int idx = 0, err = -1;
- off_t offset, sz, modules_offset = 0;
+ off_t offset, sz;
struct kcore_copy_info kci = { .stext = 0, };
char kcore_filename[PATH_MAX];
char extract_filename[PATH_MAX];
+ struct phdr_data *p;
INIT_LIST_HEAD(&kci.phdrs);
@@ -1733,14 +1734,10 @@ int kcore_copy(const char *from_dir, const char *to_dir)
gelf_fsize(extract.elf, ELF_T_PHDR, kci.phnum, EV_CURRENT);
offset = round_up(offset, page_size);
- if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
- kci.kernel_map.len))
- goto out_extract_close;
+ kcore_copy__for_each_phdr(&kci, p) {
+ off_t offs = p->rel + offset;
- if (kci.modules_map.addr) {
- modules_offset = offset + kci.kernel_map.len;
- if (kcore__add_phdr(&extract, idx, modules_offset,
- kci.modules_map.addr, kci.modules_map.len))
+ if (kcore__add_phdr(&extract, idx++, offs, p->addr, p->len))
goto out_extract_close;
}
@@ -1748,14 +1745,12 @@ int kcore_copy(const char *from_dir, const char *to_dir)
if (sz < 0 || sz > offset)
goto out_extract_close;
- if (copy_bytes(kcore.fd, kci.kernel_map.offset, extract.fd, offset,
- kci.kernel_map.len))
- goto out_extract_close;
+ kcore_copy__for_each_phdr(&kci, p) {
+ off_t offs = p->rel + offset;
- if (modules_offset && copy_bytes(kcore.fd, kci.modules_map.offset,
- extract.fd, modules_offset,
- kci.modules_map.len))
- goto out_extract_close;
+ if (copy_bytes(kcore.fd, p->offset, extract.fd, offs, p->len))
+ goto out_extract_close;
+ }
if (kcore_copy__compare_file(from_dir, to_dir, "modules"))
goto out_extract_close;
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 15/17] perf buildid-cache: kcore_copy: Get rid of kernel_map
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (13 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 14/17] perf buildid-cache: kcore_copy: Iterate phdrs Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:44 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 16/17] perf buildid-cache: kcore_copy: Copy x86 PTI entry trampoline sections Adrian Hunter
` (3 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
In preparation for adding more program headers, get rid of kernel_map and
modules_map by moving ->kernel_map and ->modules_map to newly allocated
entries on the ->phdrs list.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/symbol-elf.c | 70 ++++++++++++++++++++++++++++++++------------
1 file changed, 52 insertions(+), 18 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 91b8cfb045ec..37d9324c277c 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1400,14 +1400,47 @@ struct kcore_copy_info {
u64 first_module;
u64 last_module_symbol;
size_t phnum;
- struct phdr_data kernel_map;
- struct phdr_data modules_map;
struct list_head phdrs;
};
#define kcore_copy__for_each_phdr(k, p) \
list_for_each_entry((p), &(k)->phdrs, node)
+static struct phdr_data *phdr_data__new(u64 addr, u64 len, off_t offset)
+{
+ struct phdr_data *p = zalloc(sizeof(*p));
+
+ if (p) {
+ p->addr = addr;
+ p->len = len;
+ p->offset = offset;
+ }
+
+ return p;
+}
+
+static struct phdr_data *kcore_copy_info__addnew(struct kcore_copy_info *kci,
+ u64 addr, u64 len,
+ off_t offset)
+{
+ struct phdr_data *p = phdr_data__new(addr, len, offset);
+
+ if (p)
+ list_add_tail(&p->node, &kci->phdrs);
+
+ return p;
+}
+
+static void kcore_copy__free_phdrs(struct kcore_copy_info *kci)
+{
+ struct phdr_data *p, *tmp;
+
+ list_for_each_entry_safe(p, tmp, &kci->phdrs, node) {
+ list_del(&p->node);
+ free(p);
+ }
+}
+
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
u64 start)
{
@@ -1487,15 +1520,18 @@ static int kcore_copy__parse_modules(struct kcore_copy_info *kci,
return 0;
}
-static void kcore_copy__map(struct phdr_data *p, u64 start, u64 end, u64 pgoff,
- u64 s, u64 e)
+static int kcore_copy__map(struct kcore_copy_info *kci, u64 start, u64 end,
+ u64 pgoff, u64 s, u64 e)
{
- if (p->addr || s < start || s >= end)
- return;
+ u64 len, offset;
+
+ if (s < start || s >= end)
+ return 0;
- p->addr = s;
- p->offset = (s - start) + pgoff;
- p->len = e < end ? e - s : end - s;
+ offset = (s - start) + pgoff;
+ len = e < end ? e - s : end - s;
+
+ return kcore_copy_info__addnew(kci, s, len, offset) ? 0 : -1;
}
static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
@@ -1503,11 +1539,12 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
struct kcore_copy_info *kci = data;
u64 end = start + len;
- kcore_copy__map(&kci->kernel_map, start, end, pgoff, kci->stext,
- kci->etext);
+ if (kcore_copy__map(kci, start, end, pgoff, kci->stext, kci->etext))
+ return -1;
- kcore_copy__map(&kci->modules_map, start, end, pgoff, kci->first_module,
- kci->last_module_symbol);
+ if (kcore_copy__map(kci, start, end, pgoff, kci->first_module,
+ kci->last_module_symbol))
+ return -1;
return 0;
}
@@ -1517,11 +1554,6 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
if (elf_read_maps(elf, true, kcore_copy__read_map, kci) < 0)
return -1;
- if (kci->kernel_map.len)
- list_add_tail(&kci->kernel_map.node, &kci->phdrs);
- if (kci->modules_map.len)
- list_add_tail(&kci->modules_map.node, &kci->phdrs);
-
return 0;
}
@@ -1773,6 +1805,8 @@ int kcore_copy(const char *from_dir, const char *to_dir)
if (err)
kcore_copy__unlink(to_dir, "kallsyms");
+ kcore_copy__free_phdrs(&kci);
+
return err;
}
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 16/17] perf buildid-cache: kcore_copy: Copy x86 PTI entry trampoline sections
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (14 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 15/17] perf buildid-cache: kcore_copy: Get rid of kernel_map Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:44 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 17/17] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text Adrian Hunter
` (2 subsequent siblings)
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
Identify and copy any sections for x86 PTI entry trampolines.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/symbol-elf.c | 42 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 37d9324c277c..584966913aeb 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1392,6 +1392,11 @@ struct phdr_data {
struct list_head node;
};
+struct sym_data {
+ u64 addr;
+ struct list_head node;
+};
+
struct kcore_copy_info {
u64 stext;
u64 etext;
@@ -1401,6 +1406,7 @@ struct kcore_copy_info {
u64 last_module_symbol;
size_t phnum;
struct list_head phdrs;
+ struct list_head syms;
};
#define kcore_copy__for_each_phdr(k, p) \
@@ -1441,6 +1447,29 @@ static void kcore_copy__free_phdrs(struct kcore_copy_info *kci)
}
}
+static struct sym_data *kcore_copy__new_sym(struct kcore_copy_info *kci,
+ u64 addr)
+{
+ struct sym_data *s = zalloc(sizeof(*s));
+
+ if (s) {
+ s->addr = addr;
+ list_add_tail(&s->node, &kci->syms);
+ }
+
+ return s;
+}
+
+static void kcore_copy__free_syms(struct kcore_copy_info *kci)
+{
+ struct sym_data *s, *tmp;
+
+ list_for_each_entry_safe(s, tmp, &kci->syms, node) {
+ list_del(&s->node);
+ free(s);
+ }
+}
+
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
u64 start)
{
@@ -1471,6 +1500,9 @@ static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
return 0;
}
+ if (is_entry_trampoline(name) && !kcore_copy__new_sym(kci, start))
+ return -1;
+
return 0;
}
@@ -1538,6 +1570,7 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
{
struct kcore_copy_info *kci = data;
u64 end = start + len;
+ struct sym_data *sdat;
if (kcore_copy__map(kci, start, end, pgoff, kci->stext, kci->etext))
return -1;
@@ -1546,6 +1579,13 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
kci->last_module_symbol))
return -1;
+ list_for_each_entry(sdat, &kci->syms, node) {
+ u64 s = round_down(sdat->addr, page_size);
+
+ if (kcore_copy__map(kci, start, end, pgoff, s, s + len))
+ return -1;
+ }
+
return 0;
}
@@ -1740,6 +1780,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
struct phdr_data *p;
INIT_LIST_HEAD(&kci.phdrs);
+ INIT_LIST_HEAD(&kci.syms);
if (kcore_copy__copy_file(from_dir, to_dir, "kallsyms"))
return -1;
@@ -1806,6 +1847,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
kcore_copy__unlink(to_dir, "kallsyms");
kcore_copy__free_phdrs(&kci);
+ kcore_copy__free_syms(&kci);
return err;
}
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH V3 17/17] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (15 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 16/17] perf buildid-cache: kcore_copy: Copy x86 PTI entry trampoline sections Adrian Hunter
@ 2018-05-22 10:54 ` Adrian Hunter
2018-05-24 5:45 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-23 19:35 ` [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Arnaldo Carvalho de Melo
2018-05-31 12:09 ` Adrian Hunter
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-22 10:54 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
x86 PTI entry trampolines all map to the same physical page. If that is
reflected in the program headers of /proc/kcore, then do the same for the
copy of kcore.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
tools/perf/util/symbol-elf.c | 53 ++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 51 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 584966913aeb..29770ea61768 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1390,6 +1390,7 @@ struct phdr_data {
u64 addr;
u64 len;
struct list_head node;
+ struct phdr_data *remaps;
};
struct sym_data {
@@ -1597,16 +1598,62 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
return 0;
}
+static void kcore_copy__find_remaps(struct kcore_copy_info *kci)
+{
+ struct phdr_data *p, *k = NULL;
+ u64 kend;
+
+ if (!kci->stext)
+ return;
+
+ /* Find phdr that corresponds to the kernel map (contains stext) */
+ kcore_copy__for_each_phdr(kci, p) {
+ u64 pend = p->addr + p->len - 1;
+
+ if (p->addr <= kci->stext && pend >= kci->stext) {
+ k = p;
+ break;
+ }
+ }
+
+ if (!k)
+ return;
+
+ kend = k->offset + k->len;
+
+ /* Find phdrs that remap the kernel */
+ kcore_copy__for_each_phdr(kci, p) {
+ u64 pend = p->offset + p->len;
+
+ if (p == k)
+ continue;
+
+ if (p->offset >= k->offset && pend <= kend)
+ p->remaps = k;
+ }
+}
+
static void kcore_copy__layout(struct kcore_copy_info *kci)
{
struct phdr_data *p;
off_t rel = 0;
+ kcore_copy__find_remaps(kci);
+
kcore_copy__for_each_phdr(kci, p) {
- p->rel = rel;
- rel += p->len;
+ if (!p->remaps) {
+ p->rel = rel;
+ rel += p->len;
+ }
kci->phnum += 1;
}
+
+ kcore_copy__for_each_phdr(kci, p) {
+ struct phdr_data *k = p->remaps;
+
+ if (k)
+ p->rel = p->offset - k->offset + k->rel;
+ }
}
static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
@@ -1821,6 +1868,8 @@ int kcore_copy(const char *from_dir, const char *to_dir)
kcore_copy__for_each_phdr(&kci, p) {
off_t offs = p->rel + offset;
+ if (p->remaps)
+ continue;
if (copy_bytes(kcore.fd, p->offset, extract.fd, offs, p->len))
goto out_extract_close;
}
--
1.9.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH V3 00/17] perf tools and x86 PTI entry trampolines
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (16 preceding siblings ...)
2018-05-22 10:54 ` [PATCH V3 17/17] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text Adrian Hunter
@ 2018-05-23 19:35 ` Arnaldo Carvalho de Melo
2018-05-24 9:23 ` Adrian Hunter
2018-05-31 12:09 ` Adrian Hunter
18 siblings, 1 reply; 41+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-05-23 19:35 UTC (permalink / raw)
To: Adrian Hunter
Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
Joerg Roedel, Jiri Olsa, linux-kernel, x86
On Tue, May 22, 2018 at 01:54:28PM +0300, Adrian Hunter wrote:
> Original Cover email:
>
> Perf tools do not know about x86 PTI entry trampolines - see example
> below. These patches add a workaround, namely "perf tools: Workaround
> missing maps for x86 PTI entry trampolines", which has the limitation
> that it hard codes the addresses. Note that the workaround will work for
> old kernels and old perf.data files, but not for future kernels if the
> trampoline addresses are ever changed.
>
> At present, perf tools uses /proc/kallsyms to construct a memory map for
> the kernel. Recording such a map in the perf.data file is necessary to
> deal with kernel relocation and KASLR.
>
> While it is reasonable on its own terms to add symbols for the trampolines
> to /proc/kallsyms, the motivation here is to have perf tools use them to
> create memory maps in the same fashion as is done for the kernel text.
>
> So the first 2 patches add symbols to /proc/kallsyms for the trampolines:
>
> kallsyms: Simplify update_iter_mod()
> kallsyms, x86: Export addresses of syscall trampolines
>
> perf tools have the ability to use /proc/kcore (in conjunction with
> /proc/kallsyms) as the kernel image. So the next 2 patches add program
> headers for the trampolines to the kcore ELF:
>
> x86: Add entry trampolines to kcore
> x86: kcore: Give entry trampolines all the same offset in kcore
>
> It is worth noting that, with the kcore changes alone, perf tools require
> no changes to recognise the trampolines when using /proc/kcore.
>
> Similarly, if perf tools are used with a matching kallsyms only (by denying
> access to /proc/kcore or a vmlinux image), then the kallsyms patches are
> sufficient to recognise the trampolines with no changes needed to the
> tools.
>
> However, in the general case, when using vmlinux or dealing with
> relocations, perf tools needs memory maps for the trampolines. Because the
> kernel text map is constructed as a special case, using the same approach
> for the trampolines means treating them as a special case also, which
> requires a number of changes to perf tools, and the remaining patches deal
> with that.
>
>
> Example: make a program that does lots of small syscalls e.g.
>
> $ cat uname_x_n.c
>
> #include <sys/utsname.h>
> #include <stdlib.h>
>
> int main(int argc, char *argv[])
> {
> long n = argc > 1 ? strtol(argv[1], NULL, 0) : 0;
> struct utsname u;
>
> while (n--)
> uname(&u);
>
> return 0;
> }
>
> and then:
>
> sudo perf record uname_x_n 100000
> sudo perf report --stdio
>
> Before the changes, there are unknown symbols:
>
> # Overhead Command Shared Object Symbol
> # ........ ......... ................ ..................................
> #
> 41.91% uname_x_n [kernel.vmlinux] [k] syscall_return_via_sysret
> 19.22% uname_x_n [kernel.vmlinux] [k] copy_user_enhanced_fast_string
> 18.70% uname_x_n [unknown] [k] 0xfffffe00000e201b
> 4.09% uname_x_n libc-2.19.so [.] __GI___uname
> 3.08% uname_x_n [kernel.vmlinux] [k] do_syscall_64
> 3.02% uname_x_n [unknown] [k] 0xfffffe00000e2025
> 2.32% uname_x_n [kernel.vmlinux] [k] down_read
> 2.27% uname_x_n ld-2.19.so [.] _dl_start
> 1.97% uname_x_n [unknown] [k] 0xfffffe00000e201e
> 1.25% uname_x_n [kernel.vmlinux] [k] up_read
> 1.02% uname_x_n [unknown] [k] 0xfffffe00000e200c
> 0.99% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64
> 0.16% uname_x_n [kernel.vmlinux] [k] flush_signal_handlers
> 0.01% perf [kernel.vmlinux] [k] native_sched_clock
> 0.00% perf [kernel.vmlinux] [k] native_write_msr
>
> After the changes there are not:
>
> # Overhead Command Shared Object Symbol
> # ........ ......... ................ ..................................
> #
> 41.91% uname_x_n [kernel.vmlinux] [k] syscall_return_via_sysret
> 24.70% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64_trampoline
> 19.22% uname_x_n [kernel.vmlinux] [k] copy_user_enhanced_fast_string
> 4.09% uname_x_n libc-2.19.so [.] __GI___uname
> 3.08% uname_x_n [kernel.vmlinux] [k] do_syscall_64
> 2.32% uname_x_n [kernel.vmlinux] [k] down_read
> 2.27% uname_x_n ld-2.19.so [.] _dl_start
> 1.25% uname_x_n [kernel.vmlinux] [k] up_read
> 0.99% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64
> 0.16% uname_x_n [kernel.vmlinux] [k] flush_signal_handlers
> 0.01% perf [kernel.vmlinux] [k] native_sched_clock
> 0.00% perf [kernel.vmlinux] [k] native_write_msr
So, with just the userspace patches I get, recording with the new tool,
and then report'ing with old and new tools:
Before:
[root@seventh c]# perf-4.17.rc6.ga048a0-torvalds.master report --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 83 of event 'cycles:ppp'
# Event count (approx.): 86724689
#
# Overhead Command Shared Object Symbol
# ........ ......... ................ ..................................
#
35.12% uname_x_n [kernel.vmlinux] [k] syscall_return_via_sysret
20.86% uname_x_n [unknown] [k] 0xfffffe000005e01b
11.09% uname_x_n [kernel.vmlinux] [k] copy_user_enhanced_fast_string
8.58% uname_x_n [kernel.vmlinux] [k] __x64_sys_newuname
4.93% uname_x_n libc-2.26.so [.] __GI___uname
2.92% uname_x_n ld-2.26.so [.] dl_main
2.66% uname_x_n [kernel.vmlinux] [k] __x86_indirect_thunk_rax
2.46% uname_x_n [kernel.vmlinux] [k] do_syscall_64
2.18% uname_x_n [unknown] [k] 0xfffffe000005e01e
2.17% uname_x_n uname_x_n [.] main
2.14% uname_x_n [unknown] [k] 0xfffffe000005e00c
1.98% uname_x_n [unknown] [k] 0xfffffe000005e025
1.37% uname_x_n [kernel.vmlinux] [k] down_read
1.27% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64
0.23% uname_x_n [kernel.vmlinux] [k] get_random_u64
0.01% perf [kernel.vmlinux] [k] end_repeat_nmi
0.00% perf [kernel.vmlinux] [k] native_write_msr
#
# (Tip: Use --symfs <dir> if your symbol files are in non-standard locations)
#
After:
[root@seventh c]# perf report --stdio
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 83 of event 'cycles:ppp'
# Event count (approx.): 86724689
#
# Overhead Command Shared Object Symbol
# ........ ......... ................ ..................................
#
35.12% uname_x_n [kernel.vmlinux] [k] syscall_return_via_sysret
27.18% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64_trampoline
11.09% uname_x_n [kernel.vmlinux] [k] copy_user_enhanced_fast_string
8.58% uname_x_n [kernel.vmlinux] [k] __x64_sys_newuname
4.93% uname_x_n libc-2.26.so [.] __GI___uname
2.92% uname_x_n ld-2.26.so [.] dl_main
2.66% uname_x_n [kernel.vmlinux] [k] __x86_indirect_thunk_rax
2.46% uname_x_n [kernel.vmlinux] [k] do_syscall_64
2.17% uname_x_n uname_x_n [.] main
1.37% uname_x_n [kernel.vmlinux] [k] down_read
1.27% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64
0.23% uname_x_n [kernel.vmlinux] [k] get_random_u64
0.01% perf [kernel.vmlinux] [k] end_repeat_nmi
0.00% perf [kernel.vmlinux] [k] native_write_msr
#
# (Tip: Generate a script for your data: perf script -g <lang>)
#
[root@seventh c]#
[root@seventh c]#
What am I missing while testing this?
- Arnaldo
^ permalink raw reply [flat|nested] 41+ messages in thread
* [tip:perf/core] perf machine: Add nr_cpus_avail()
2018-05-22 10:54 ` [PATCH V3 04/17] perf tools: Add machine__nr_cpus_avail() Adrian Hunter
@ 2018-05-24 5:38 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:38 UTC (permalink / raw)
To: linux-tip-commits
Cc: alexander.shishkin, peterz, jolsa, ak, acme, luto, joro, mingo,
hpa, adrian.hunter, linux-kernel, tglx, dave.hansen
Commit-ID: 9cecca325ea879c84fcd31a5e609a514c1a1dbd1
Gitweb: https://git.kernel.org/tip/9cecca325ea879c84fcd31a5e609a514c1a1dbd1
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:32 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Tue, 22 May 2018 10:52:49 -0300
perf machine: Add nr_cpus_avail()
Add a function to return the number of the machine's available CPUs.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-5-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/env.c | 13 +++++++++++++
tools/perf/util/env.h | 1 +
tools/perf/util/machine.c | 5 +++++
tools/perf/util/machine.h | 1 +
4 files changed, 20 insertions(+)
diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
index 319fb0a0d05e..59f38c7693f8 100644
--- a/tools/perf/util/env.c
+++ b/tools/perf/util/env.c
@@ -106,11 +106,24 @@ static int perf_env__read_arch(struct perf_env *env)
return env->arch ? 0 : -ENOMEM;
}
+static int perf_env__read_nr_cpus_avail(struct perf_env *env)
+{
+ if (env->nr_cpus_avail == 0)
+ env->nr_cpus_avail = cpu__max_present_cpu();
+
+ return env->nr_cpus_avail ? 0 : -ENOENT;
+}
+
const char *perf_env__raw_arch(struct perf_env *env)
{
return env && !perf_env__read_arch(env) ? env->arch : "unknown";
}
+int perf_env__nr_cpus_avail(struct perf_env *env)
+{
+ return env && !perf_env__read_nr_cpus_avail(env) ? env->nr_cpus_avail : 0;
+}
+
void cpu_cache_level__free(struct cpu_cache_level *cache)
{
free(cache->type);
diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
index 62e193948608..1f3ccc368530 100644
--- a/tools/perf/util/env.h
+++ b/tools/perf/util/env.h
@@ -77,5 +77,6 @@ void cpu_cache_level__free(struct cpu_cache_level *cache);
const char *perf_env__arch(struct perf_env *env);
const char *perf_env__raw_arch(struct perf_env *env);
+int perf_env__nr_cpus_avail(struct perf_env *env);
#endif /* __PERF_ENV_H */
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index e011a7160380..f62ecd9c36e8 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2305,6 +2305,11 @@ bool machine__is(struct machine *machine, const char *arch)
return machine && !strcmp(perf_env__raw_arch(machine->env), arch);
}
+int machine__nr_cpus_avail(struct machine *machine)
+{
+ return machine ? perf_env__nr_cpus_avail(machine->env) : 0;
+}
+
int machine__get_kernel_start(struct machine *machine)
{
struct map *map = machine__kernel_map(machine);
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index b31d33b5aa2a..2d2b092ba753 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -189,6 +189,7 @@ static inline bool machine__is_host(struct machine *machine)
}
bool machine__is(struct machine *machine, const char *arch);
+int machine__nr_cpus_avail(struct machine *machine);
struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid);
struct thread *machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid);
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [tip:perf/core] perf machine: Workaround missing maps for x86 PTI entry trampolines
2018-05-22 10:54 ` [PATCH V3 05/17] perf tools: Workaround missing maps for x86 PTI entry trampolines Adrian Hunter
@ 2018-05-24 5:38 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:38 UTC (permalink / raw)
To: linux-tip-commits
Cc: jolsa, ak, adrian.hunter, peterz, linux-kernel, mingo, acme,
alexander.shishkin, tglx, joro, hpa, dave.hansen, luto
Commit-ID: 4d99e4136580d178e3523281a820be17bf814bf8
Gitweb: https://git.kernel.org/tip/4d99e4136580d178e3523281a820be17bf814bf8
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:33 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Tue, 22 May 2018 10:54:22 -0300
perf machine: Workaround missing maps for x86 PTI entry trampolines
On x86_64 the PTI entry trampolines are not in the kernel map created by
perf tools. That results in the addresses having no symbols and prevents
annotation. It also causes Intel PT to have decoding errors at the
trampoline addresses.
Workaround that by creating maps for the trampolines.
At present the kernel does not export information revealing where the
trampolines are. Until that happens, the addresses are hardcoded.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-6-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/machine.c | 96 +++++++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/machine.h | 3 ++
tools/perf/util/symbol.c | 12 +++---
3 files changed, 106 insertions(+), 5 deletions(-)
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index f62ecd9c36e8..db695603873b 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -851,6 +851,102 @@ static int machine__get_running_kernel_start(struct machine *machine,
return 0;
}
+/* Kernel-space maps for symbols that are outside the main kernel map and module maps */
+struct extra_kernel_map {
+ u64 start;
+ u64 end;
+ u64 pgoff;
+};
+
+static int machine__create_extra_kernel_map(struct machine *machine,
+ struct dso *kernel,
+ struct extra_kernel_map *xm)
+{
+ struct kmap *kmap;
+ struct map *map;
+
+ map = map__new2(xm->start, kernel);
+ if (!map)
+ return -1;
+
+ map->end = xm->end;
+ map->pgoff = xm->pgoff;
+
+ kmap = map__kmap(map);
+
+ kmap->kmaps = &machine->kmaps;
+
+ map_groups__insert(&machine->kmaps, map);
+
+ pr_debug2("Added extra kernel map %" PRIx64 "-%" PRIx64 "\n",
+ map->start, map->end);
+
+ map__put(map);
+
+ return 0;
+}
+
+static u64 find_entry_trampoline(struct dso *dso)
+{
+ /* Duplicates are removed so lookup all aliases */
+ const char *syms[] = {
+ "_entry_trampoline",
+ "__entry_trampoline_start",
+ "entry_SYSCALL_64_trampoline",
+ };
+ struct symbol *sym = dso__first_symbol(dso);
+ unsigned int i;
+
+ for (; sym; sym = dso__next_symbol(sym)) {
+ if (sym->binding != STB_GLOBAL)
+ continue;
+ for (i = 0; i < ARRAY_SIZE(syms); i++) {
+ if (!strcmp(sym->name, syms[i]))
+ return sym->start;
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * These values can be used for kernels that do not have symbols for the entry
+ * trampolines in kallsyms.
+ */
+#define X86_64_CPU_ENTRY_AREA_PER_CPU 0xfffffe0000000000ULL
+#define X86_64_CPU_ENTRY_AREA_SIZE 0x2c000
+#define X86_64_ENTRY_TRAMPOLINE 0x6000
+
+/* Map x86_64 PTI entry trampolines */
+int machine__map_x86_64_entry_trampolines(struct machine *machine,
+ struct dso *kernel)
+{
+ u64 pgoff = find_entry_trampoline(kernel);
+ int nr_cpus_avail, cpu;
+
+ if (!pgoff)
+ return 0;
+
+ nr_cpus_avail = machine__nr_cpus_avail(machine);
+
+ /* Add a 1 page map for each CPU's entry trampoline */
+ for (cpu = 0; cpu < nr_cpus_avail; cpu++) {
+ u64 va = X86_64_CPU_ENTRY_AREA_PER_CPU +
+ cpu * X86_64_CPU_ENTRY_AREA_SIZE +
+ X86_64_ENTRY_TRAMPOLINE;
+ struct extra_kernel_map xm = {
+ .start = va,
+ .end = va + page_size,
+ .pgoff = pgoff,
+ };
+
+ if (machine__create_extra_kernel_map(machine, kernel, &xm) < 0)
+ return -1;
+ }
+
+ return 0;
+}
+
static int
__machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
{
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index 2d2b092ba753..b6a1c3eb3d65 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -268,4 +268,7 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
*/
char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, char **modp);
+int machine__map_x86_64_entry_trampolines(struct machine *machine,
+ struct dso *kernel);
+
#endif /* __PERF_MACHINE_H */
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 4a39f4d0a174..701144094183 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1490,20 +1490,22 @@ int dso__load(struct dso *dso, struct map *map)
goto out;
}
+ if (map->groups && map->groups->machine)
+ machine = map->groups->machine;
+ else
+ machine = NULL;
+
if (dso->kernel) {
if (dso->kernel == DSO_TYPE_KERNEL)
ret = dso__load_kernel_sym(dso, map);
else if (dso->kernel == DSO_TYPE_GUEST_KERNEL)
ret = dso__load_guest_kernel_sym(dso, map);
+ if (machine__is(machine, "x86_64"))
+ machine__map_x86_64_entry_trampolines(machine, dso);
goto out;
}
- if (map->groups && map->groups->machine)
- machine = map->groups->machine;
- else
- machine = NULL;
-
dso->adjust_symbols = 0;
if (perfmap) {
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [tip:perf/core] perf machine: Fix map_groups__split_kallsyms() for entry trampoline symbols
2018-05-22 10:54 ` [PATCH V3 06/17] perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols Adrian Hunter
@ 2018-05-24 5:39 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:39 UTC (permalink / raw)
To: linux-tip-commits
Cc: dave.hansen, peterz, ak, acme, hpa, mingo, linux-kernel, luto,
jolsa, joro, alexander.shishkin, adrian.hunter, tglx
Commit-ID: 4d004365e25251002935fc3843d80934248ad3ed
Gitweb: https://git.kernel.org/tip/4d004365e25251002935fc3843d80934248ad3ed
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:34 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Tue, 22 May 2018 10:55:59 -0300
perf machine: Fix map_groups__split_kallsyms() for entry trampoline symbols
When kernel symbols are derived from /proc/kallsyms only (not using
vmlinux or /proc/kcore), map_groups__split_kallsyms() is used. However,
that function makes assumptions that do not hold for entry trampoline
symbols. For now, simply remove the entry trampoline symbols at that
point, as they are no longer needed.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-7-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/map.h | 8 ++++++++
tools/perf/util/symbol.c | 13 +++++++++++++
2 files changed, 21 insertions(+)
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index f1afe1ab6ff7..fafcc375ed37 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -8,6 +8,7 @@
#include <linux/rbtree.h>
#include <pthread.h>
#include <stdio.h>
+#include <string.h>
#include <stdbool.h>
#include <linux/types.h>
#include "rwsem.h"
@@ -239,4 +240,11 @@ static inline bool __map__is_kmodule(const struct map *map)
bool map__has_symbols(const struct map *map);
+#define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"
+
+static inline bool is_entry_trampoline(const char *name)
+{
+ return !strcmp(name, ENTRY_TRAMPOLINE_NAME);
+}
+
#endif /* __PERF_MAP_H */
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 701144094183..929058da6727 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -737,12 +737,15 @@ static int map_groups__split_kallsyms(struct map_groups *kmaps, struct dso *dso,
struct rb_root *root = &dso->symbols;
struct rb_node *next = rb_first(root);
int kernel_range = 0;
+ bool x86_64;
if (!kmaps)
return -1;
machine = kmaps->machine;
+ x86_64 = machine__is(machine, "x86_64");
+
while (next) {
char *module;
@@ -790,6 +793,16 @@ static int map_groups__split_kallsyms(struct map_groups *kmaps, struct dso *dso,
*/
pos->start = curr_map->map_ip(curr_map, pos->start);
pos->end = curr_map->map_ip(curr_map, pos->end);
+ } else if (x86_64 && is_entry_trampoline(pos->name)) {
+ /*
+ * These symbols are not needed anymore since the
+ * trampoline maps refer to the text section and it's
+ * symbols instead. Avoid having to deal with
+ * relocations, and the assumption that the first symbol
+ * is the start of kernel text, by simply removing the
+ * symbols at this point.
+ */
+ goto discard_symbol;
} else if (curr_map != initial_map) {
char dso_name[PATH_MAX];
struct dso *ndso;
* [tip:perf/core] perf machine: Allow for extra kernel maps
2018-05-22 10:54 ` [PATCH V3 07/17] perf tools: Allow for extra kernel maps Adrian Hunter
@ 2018-05-24 5:39 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:39 UTC (permalink / raw)
To: linux-tip-commits
Cc: dave.hansen, linux-kernel, hpa, joro, luto, tglx,
alexander.shishkin, jolsa, peterz, ak, adrian.hunter, acme,
mingo
Commit-ID: 5759a6820aadd38b2c8c10e93919eae8e31a9f9a
Gitweb: https://git.kernel.org/tip/5759a6820aadd38b2c8c10e93919eae8e31a9f9a
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:35 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Tue, 22 May 2018 10:59:22 -0300
perf machine: Allow for extra kernel maps
Identify extra kernel maps by name so that they can be distinguished
from the kernel map and module maps.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-8-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/event.c | 2 +-
tools/perf/util/machine.c | 8 ++++++--
tools/perf/util/map.c | 22 ++++++++++++++++++----
tools/perf/util/map.h | 7 ++++++-
tools/perf/util/symbol.c | 7 +++----
5 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 244135b5ea43..aafa9878465f 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -487,7 +487,7 @@ int perf_event__synthesize_modules(struct perf_tool *tool,
for (pos = maps__first(maps); pos; pos = map__next(pos)) {
size_t size;
- if (__map__is_kernel(pos))
+ if (!__map__is_kmodule(pos))
continue;
size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index db695603873b..355d23bcd443 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -856,6 +856,7 @@ struct extra_kernel_map {
u64 start;
u64 end;
u64 pgoff;
+ char name[KMAP_NAME_LEN];
};
static int machine__create_extra_kernel_map(struct machine *machine,
@@ -875,11 +876,12 @@ static int machine__create_extra_kernel_map(struct machine *machine,
kmap = map__kmap(map);
kmap->kmaps = &machine->kmaps;
+ strlcpy(kmap->name, xm->name, KMAP_NAME_LEN);
map_groups__insert(&machine->kmaps, map);
- pr_debug2("Added extra kernel map %" PRIx64 "-%" PRIx64 "\n",
- map->start, map->end);
+ pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
+ kmap->name, map->start, map->end);
map__put(map);
@@ -940,6 +942,8 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
.pgoff = pgoff,
};
+ strlcpy(xm.name, ENTRY_TRAMPOLINE_NAME, KMAP_NAME_LEN);
+
if (machine__create_extra_kernel_map(machine, kernel, &xm) < 0)
return -1;
}
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index c8fe836e4c3c..6ae97eda370b 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -252,6 +252,13 @@ bool __map__is_kernel(const struct map *map)
return machine__kernel_map(map->groups->machine) == map;
}
+bool __map__is_extra_kernel_map(const struct map *map)
+{
+ struct kmap *kmap = __map__kmap((struct map *)map);
+
+ return kmap && kmap->name[0];
+}
+
bool map__has_symbols(const struct map *map)
{
return dso__has_symbols(map->dso);
@@ -846,15 +853,22 @@ struct map *map__next(struct map *map)
return NULL;
}
-struct kmap *map__kmap(struct map *map)
+struct kmap *__map__kmap(struct map *map)
{
- if (!map->dso || !map->dso->kernel) {
- pr_err("Internal error: map__kmap with a non-kernel map\n");
+ if (!map->dso || !map->dso->kernel)
return NULL;
- }
return (struct kmap *)(map + 1);
}
+struct kmap *map__kmap(struct map *map)
+{
+ struct kmap *kmap = __map__kmap(map);
+
+ if (!kmap)
+ pr_err("Internal error: map__kmap with a non-kernel map\n");
+ return kmap;
+}
+
struct map_groups *map__kmaps(struct map *map)
{
struct kmap *kmap = map__kmap(map);
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index fafcc375ed37..97e2a063bd65 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -47,9 +47,12 @@ struct map {
refcount_t refcnt;
};
+#define KMAP_NAME_LEN 256
+
struct kmap {
struct ref_reloc_sym *ref_reloc_sym;
struct map_groups *kmaps;
+ char name[KMAP_NAME_LEN];
};
struct maps {
@@ -76,6 +79,7 @@ static inline struct map_groups *map_groups__get(struct map_groups *mg)
void map_groups__put(struct map_groups *mg);
+struct kmap *__map__kmap(struct map *map);
struct kmap *map__kmap(struct map *map);
struct map_groups *map__kmaps(struct map *map);
@@ -232,10 +236,11 @@ int map_groups__fixup_overlappings(struct map_groups *mg, struct map *map,
struct map *map_groups__find_by_name(struct map_groups *mg, const char *name);
bool __map__is_kernel(const struct map *map);
+bool __map__is_extra_kernel_map(const struct map *map);
static inline bool __map__is_kmodule(const struct map *map)
{
- return !__map__is_kernel(map);
+ return !__map__is_kernel(map) && !__map__is_extra_kernel_map(map);
}
bool map__has_symbols(const struct map *map);
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 929058da6727..cdddae67f40c 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1030,7 +1030,7 @@ struct map *map_groups__first(struct map_groups *mg)
return maps__first(&mg->maps);
}
-static int do_validate_kcore_modules(const char *filename, struct map *map,
+static int do_validate_kcore_modules(const char *filename,
struct map_groups *kmaps)
{
struct rb_root modules = RB_ROOT;
@@ -1046,8 +1046,7 @@ static int do_validate_kcore_modules(const char *filename, struct map *map,
struct map *next = map_groups__next(old_map);
struct module_info *mi;
- if (old_map == map || old_map->start == map->start) {
- /* The kernel map */
+ if (!__map__is_kmodule(old_map)) {
old_map = next;
continue;
}
@@ -1104,7 +1103,7 @@ static int validate_kcore_modules(const char *kallsyms_filename,
kallsyms_filename))
return -EINVAL;
- if (do_validate_kcore_modules(modules_filename, map, kmaps))
+ if (do_validate_kcore_modules(modules_filename, kmaps))
return -EINVAL;
return 0;
* [tip:perf/core] perf machine: Create maps for x86 PTI entry trampolines
2018-05-22 10:54 ` [PATCH V3 08/17] perf tools: Create maps for x86 PTI entry trampolines Adrian Hunter
@ 2018-05-24 5:40 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:40 UTC (permalink / raw)
To: linux-tip-commits
Cc: alexander.shishkin, ak, tglx, jolsa, linux-kernel, acme, peterz,
hpa, adrian.hunter, mingo, luto, joro, dave.hansen
Commit-ID: 1c5aae7710bb9ecf82a5cc88e35a028a8b385763
Gitweb: https://git.kernel.org/tip/1c5aae7710bb9ecf82a5cc88e35a028a8b385763
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:36 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:24:08 -0300
perf machine: Create maps for x86 PTI entry trampolines
Create maps for the x86 PTI entry trampolines, based on symbols found
in kallsyms. It is also necessary to keep track of whether the
trampolines have been mapped, particularly when the kernel dso is kcore.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-9-git-send-email-adrian.hunter@intel.com
[ Fix extra_kernel_map_info.cnt designated struct initializer on gcc 4.4.7 (centos:6, etc) ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/arch/x86/util/Build | 1 +
tools/perf/arch/x86/util/machine.c | 103 +++++++++++++++++++++++++++++++++++++
tools/perf/util/machine.c | 66 +++++++++++++++++-------
tools/perf/util/machine.h | 19 +++++++
tools/perf/util/symbol.c | 17 ++++++
5 files changed, 187 insertions(+), 19 deletions(-)
diff --git a/tools/perf/arch/x86/util/Build b/tools/perf/arch/x86/util/Build
index f95e6f46ef0d..aa1ce5f6cc00 100644
--- a/tools/perf/arch/x86/util/Build
+++ b/tools/perf/arch/x86/util/Build
@@ -4,6 +4,7 @@ libperf-y += pmu.o
libperf-y += kvm-stat.o
libperf-y += perf_regs.o
libperf-y += group.o
+libperf-y += machine.o
libperf-$(CONFIG_DWARF) += dwarf-regs.o
libperf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
diff --git a/tools/perf/arch/x86/util/machine.c b/tools/perf/arch/x86/util/machine.c
new file mode 100644
index 000000000000..4520ac53caa9
--- /dev/null
+++ b/tools/perf/arch/x86/util/machine.c
@@ -0,0 +1,103 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/types.h>
+#include <linux/string.h>
+#include <stdlib.h>
+
+#include "../../util/machine.h"
+#include "../../util/map.h"
+#include "../../util/symbol.h"
+#include "../../util/sane_ctype.h"
+
+#include <symbol/kallsyms.h>
+
+#if defined(__x86_64__)
+
+struct extra_kernel_map_info {
+ int cnt;
+ int max_cnt;
+ struct extra_kernel_map *maps;
+ bool get_entry_trampolines;
+ u64 entry_trampoline;
+};
+
+static int add_extra_kernel_map(struct extra_kernel_map_info *mi, u64 start,
+ u64 end, u64 pgoff, const char *name)
+{
+ if (mi->cnt >= mi->max_cnt) {
+ void *buf;
+ size_t sz;
+
+ mi->max_cnt = mi->max_cnt ? mi->max_cnt * 2 : 32;
+ sz = sizeof(struct extra_kernel_map) * mi->max_cnt;
+ buf = realloc(mi->maps, sz);
+ if (!buf)
+ return -1;
+ mi->maps = buf;
+ }
+
+ mi->maps[mi->cnt].start = start;
+ mi->maps[mi->cnt].end = end;
+ mi->maps[mi->cnt].pgoff = pgoff;
+ strlcpy(mi->maps[mi->cnt].name, name, KMAP_NAME_LEN);
+
+ mi->cnt += 1;
+
+ return 0;
+}
+
+static int find_extra_kernel_maps(void *arg, const char *name, char type,
+ u64 start)
+{
+ struct extra_kernel_map_info *mi = arg;
+
+ if (!mi->entry_trampoline && kallsyms2elf_binding(type) == STB_GLOBAL &&
+ !strcmp(name, "_entry_trampoline")) {
+ mi->entry_trampoline = start;
+ return 0;
+ }
+
+ if (is_entry_trampoline(name)) {
+ u64 end = start + page_size;
+
+ return add_extra_kernel_map(mi, start, end, 0, name);
+ }
+
+ return 0;
+}
+
+int machine__create_extra_kernel_maps(struct machine *machine,
+ struct dso *kernel)
+{
+ struct extra_kernel_map_info mi = { .cnt = 0, };
+ char filename[PATH_MAX];
+ int ret;
+ int i;
+
+ machine__get_kallsyms_filename(machine, filename, PATH_MAX);
+
+ if (symbol__restricted_filename(filename, "/proc/kallsyms"))
+ return 0;
+
+ ret = kallsyms__parse(filename, &mi, find_extra_kernel_maps);
+ if (ret)
+ goto out_free;
+
+ if (!mi.entry_trampoline)
+ goto out_free;
+
+ for (i = 0; i < mi.cnt; i++) {
+ struct extra_kernel_map *xm = &mi.maps[i];
+
+ xm->pgoff = mi.entry_trampoline;
+ ret = machine__create_extra_kernel_map(machine, kernel, xm);
+ if (ret)
+ goto out_free;
+ }
+
+ machine->trampolines_mapped = mi.cnt;
+out_free:
+ free(mi.maps);
+ return ret;
+}
+
+#endif
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 355d23bcd443..dd7ab0731167 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -807,8 +807,8 @@ struct process_args {
u64 start;
};
-static void machine__get_kallsyms_filename(struct machine *machine, char *buf,
- size_t bufsz)
+void machine__get_kallsyms_filename(struct machine *machine, char *buf,
+ size_t bufsz)
{
if (machine__is_default_guest(machine))
scnprintf(buf, bufsz, "%s", symbol_conf.default_guest_kallsyms);
@@ -851,17 +851,9 @@ static int machine__get_running_kernel_start(struct machine *machine,
return 0;
}
-/* Kernel-space maps for symbols that are outside the main kernel map and module maps */
-struct extra_kernel_map {
- u64 start;
- u64 end;
- u64 pgoff;
- char name[KMAP_NAME_LEN];
-};
-
-static int machine__create_extra_kernel_map(struct machine *machine,
- struct dso *kernel,
- struct extra_kernel_map *xm)
+int machine__create_extra_kernel_map(struct machine *machine,
+ struct dso *kernel,
+ struct extra_kernel_map *xm)
{
struct kmap *kmap;
struct map *map;
@@ -923,9 +915,33 @@ static u64 find_entry_trampoline(struct dso *dso)
int machine__map_x86_64_entry_trampolines(struct machine *machine,
struct dso *kernel)
{
- u64 pgoff = find_entry_trampoline(kernel);
+ struct map_groups *kmaps = &machine->kmaps;
+ struct maps *maps = &kmaps->maps;
int nr_cpus_avail, cpu;
+ bool found = false;
+ struct map *map;
+ u64 pgoff;
+
+ /*
+ * In the vmlinux case, pgoff is a virtual address which must now be
+ * mapped to a vmlinux offset.
+ */
+ for (map = maps__first(maps); map; map = map__next(map)) {
+ struct kmap *kmap = __map__kmap(map);
+ struct map *dest_map;
+
+ if (!kmap || !is_entry_trampoline(kmap->name))
+ continue;
+
+ dest_map = map_groups__find(kmaps, map->pgoff);
+ if (dest_map != map)
+ map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
+ found = true;
+ }
+ if (found || machine->trampolines_mapped)
+ return 0;
+ pgoff = find_entry_trampoline(kernel);
if (!pgoff)
return 0;
@@ -948,6 +964,14 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
return -1;
}
+ machine->trampolines_mapped = nr_cpus_avail;
+
+ return 0;
+}
+
+int __weak machine__create_extra_kernel_maps(struct machine *machine __maybe_unused,
+ struct dso *kernel __maybe_unused)
+{
return 0;
}
@@ -1306,9 +1330,8 @@ int machine__create_kernel_maps(struct machine *machine)
return -1;
ret = __machine__create_kernel_maps(machine, kernel);
- dso__put(kernel);
if (ret < 0)
- return -1;
+ goto out_put;
if (symbol_conf.use_modules && machine__create_modules(machine) < 0) {
if (machine__is_host(machine))
@@ -1323,7 +1346,8 @@ int machine__create_kernel_maps(struct machine *machine)
if (name &&
map__set_kallsyms_ref_reloc_sym(machine->vmlinux_map, name, addr)) {
machine__destroy_kernel_maps(machine);
- return -1;
+ ret = -1;
+ goto out_put;
}
/* we have a real start address now, so re-order the kmaps */
@@ -1339,12 +1363,16 @@ int machine__create_kernel_maps(struct machine *machine)
map__put(map);
}
+ if (machine__create_extra_kernel_maps(machine, kernel))
+ pr_debug("Problems creating extra kernel maps, continuing anyway...\n");
+
/* update end address of the kernel map using adjacent module address */
map = map__next(machine__kernel_map(machine));
if (map)
machine__set_kernel_mmap(machine, addr, map->start);
-
- return 0;
+out_put:
+ dso__put(kernel);
+ return ret;
}
static bool machine__uses_kcore(struct machine *machine)
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index b6a1c3eb3d65..1de7660d93e9 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -56,6 +56,7 @@ struct machine {
void *priv;
u64 db_id;
};
+ bool trampolines_mapped;
};
static inline struct threads *machine__threads(struct machine *machine, pid_t tid)
@@ -268,6 +269,24 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
*/
char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, char **modp);
+void machine__get_kallsyms_filename(struct machine *machine, char *buf,
+ size_t bufsz);
+
+int machine__create_extra_kernel_maps(struct machine *machine,
+ struct dso *kernel);
+
+/* Kernel-space maps for symbols that are outside the main kernel map and module maps */
+struct extra_kernel_map {
+ u64 start;
+ u64 end;
+ u64 pgoff;
+ char name[KMAP_NAME_LEN];
+};
+
+int machine__create_extra_kernel_map(struct machine *machine,
+ struct dso *kernel,
+ struct extra_kernel_map *xm);
+
int machine__map_x86_64_entry_trampolines(struct machine *machine,
struct dso *kernel);
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index cdddae67f40c..8c84437f2a10 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1158,6 +1158,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
struct map_groups *kmaps = map__kmaps(map);
struct kcore_mapfn_data md;
struct map *old_map, *new_map, *replacement_map = NULL;
+ struct machine *machine;
bool is_64_bit;
int err, fd;
char kcore_filename[PATH_MAX];
@@ -1166,6 +1167,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
if (!kmaps)
return -EINVAL;
+ machine = kmaps->machine;
+
/* This function requires that the map is the kernel map */
if (!__map__is_kernel(map))
return -EINVAL;
@@ -1209,6 +1212,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
map_groups__remove(kmaps, old_map);
old_map = next;
}
+ machine->trampolines_mapped = false;
/* Find the kernel map using the '_stext' symbol */
if (!kallsyms__get_function_start(kallsyms_filename, "_stext", &stext)) {
@@ -1245,6 +1249,19 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
map__put(new_map);
}
+ if (machine__is(machine, "x86_64")) {
+ u64 addr;
+
+ /*
+ * If one of the corresponding symbols is there, assume the
+ * entry trampoline maps are too.
+ */
+ if (!kallsyms__get_function_start(kallsyms_filename,
+ ENTRY_TRAMPOLINE_NAME,
+ &addr))
+ machine->trampolines_mapped = true;
+ }
+
/*
* Set the data type and long name so that kcore can be read via
* dso__data_read_addr().
* [tip:perf/core] perf machine: Synthesize and process mmap events for x86 PTI entry trampolines
2018-05-22 10:54 ` [PATCH V3 09/17] perf tools: Synthesize and process mmap events " Adrian Hunter
@ 2018-05-24 5:40 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:40 UTC (permalink / raw)
To: linux-tip-commits
Cc: ak, hpa, alexander.shishkin, luto, acme, linux-kernel, jolsa,
tglx, peterz, dave.hansen, adrian.hunter, joro, mingo
Commit-ID: a8ce99b0ee9ad32debad0a9f28d21451ba237cc1
Gitweb: https://git.kernel.org/tip/a8ce99b0ee9ad32debad0a9f28d21451ba237cc1
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:37 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:39 -0300
perf machine: Synthesize and process mmap events for x86 PTI entry trampolines
As with the kernel text, the location of the x86 PTI entry trampolines
must be recorded in the perf.data file. So, as is done for the kernel
map, synthesize an mmap event for the trampolines, and add processing
for it.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-10-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/arch/x86/util/Build | 1 +
tools/perf/arch/x86/util/event.c | 76 ++++++++++++++++++++++++++++++++++++++++
tools/perf/util/event.c | 34 ++++++++++++++----
tools/perf/util/event.h | 8 +++++
tools/perf/util/machine.c | 28 +++++++++++++++
5 files changed, 140 insertions(+), 7 deletions(-)
diff --git a/tools/perf/arch/x86/util/Build b/tools/perf/arch/x86/util/Build
index aa1ce5f6cc00..844b8f335532 100644
--- a/tools/perf/arch/x86/util/Build
+++ b/tools/perf/arch/x86/util/Build
@@ -5,6 +5,7 @@ libperf-y += kvm-stat.o
libperf-y += perf_regs.o
libperf-y += group.o
libperf-y += machine.o
+libperf-y += event.o
libperf-$(CONFIG_DWARF) += dwarf-regs.o
libperf-$(CONFIG_BPF_PROLOGUE) += dwarf-regs.o
diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
new file mode 100644
index 000000000000..675a0213044d
--- /dev/null
+++ b/tools/perf/arch/x86/util/event.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/types.h>
+#include <linux/string.h>
+
+#include "../../util/machine.h"
+#include "../../util/tool.h"
+#include "../../util/map.h"
+#include "../../util/util.h"
+#include "../../util/debug.h"
+
+#if defined(__x86_64__)
+
+int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
+ perf_event__handler_t process,
+ struct machine *machine)
+{
+ int rc = 0;
+ struct map *pos;
+ struct map_groups *kmaps = &machine->kmaps;
+ struct maps *maps = &kmaps->maps;
+ union perf_event *event = zalloc(sizeof(event->mmap) +
+ machine->id_hdr_size);
+
+ if (!event) {
+ pr_debug("Not enough memory synthesizing mmap event "
+ "for extra kernel maps\n");
+ return -1;
+ }
+
+ for (pos = maps__first(maps); pos; pos = map__next(pos)) {
+ struct kmap *kmap;
+ size_t size;
+
+ if (!__map__is_extra_kernel_map(pos))
+ continue;
+
+ kmap = map__kmap(pos);
+
+ size = sizeof(event->mmap) - sizeof(event->mmap.filename) +
+ PERF_ALIGN(strlen(kmap->name) + 1, sizeof(u64)) +
+ machine->id_hdr_size;
+
+ memset(event, 0, size);
+
+ event->mmap.header.type = PERF_RECORD_MMAP;
+
+ /*
+ * kernel uses 0 for user space maps, see kernel/perf_event.c
+ * __perf_event_mmap
+ */
+ if (machine__is_host(machine))
+ event->header.misc = PERF_RECORD_MISC_KERNEL;
+ else
+ event->header.misc = PERF_RECORD_MISC_GUEST_KERNEL;
+
+ event->mmap.header.size = size;
+
+ event->mmap.start = pos->start;
+ event->mmap.len = pos->end - pos->start;
+ event->mmap.pgoff = pos->pgoff;
+ event->mmap.pid = machine->pid;
+
+ strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
+
+ if (perf_tool__process_synth_event(tool, event, machine,
+ process) != 0) {
+ rc = -1;
+ break;
+ }
+ }
+
+ free(event);
+ return rc;
+}
+
+#endif
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index aafa9878465f..0c8ecf0c78a4 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -88,10 +88,10 @@ static const char *perf_ns__name(unsigned int id)
return perf_ns__names[id];
}
-static int perf_tool__process_synth_event(struct perf_tool *tool,
- union perf_event *event,
- struct machine *machine,
- perf_event__handler_t process)
+int perf_tool__process_synth_event(struct perf_tool *tool,
+ union perf_event *event,
+ struct machine *machine,
+ perf_event__handler_t process)
{
struct perf_sample synth_sample = {
.pid = -1,
@@ -888,9 +888,16 @@ int kallsyms__get_function_start(const char *kallsyms_filename,
return 0;
}
-int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
- perf_event__handler_t process,
- struct machine *machine)
+int __weak perf_event__synthesize_extra_kmaps(struct perf_tool *tool __maybe_unused,
+ perf_event__handler_t process __maybe_unused,
+ struct machine *machine __maybe_unused)
+{
+ return 0;
+}
+
+static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
+ perf_event__handler_t process,
+ struct machine *machine)
{
size_t size;
struct map *map = machine__kernel_map(machine);
@@ -943,6 +950,19 @@ int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
return err;
}
+int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
+ perf_event__handler_t process,
+ struct machine *machine)
+{
+ int err;
+
+ err = __perf_event__synthesize_kernel_mmap(tool, process, machine);
+ if (err < 0)
+ return err;
+
+ return perf_event__synthesize_extra_kmaps(tool, process, machine);
+}
+
int perf_event__synthesize_thread_map2(struct perf_tool *tool,
struct thread_map *threads,
perf_event__handler_t process,
diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
index 0f794744919c..bfa60bcafbde 100644
--- a/tools/perf/util/event.h
+++ b/tools/perf/util/event.h
@@ -750,6 +750,10 @@ int perf_event__process_exit(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
struct machine *machine);
+int perf_tool__process_synth_event(struct perf_tool *tool,
+ union perf_event *event,
+ struct machine *machine,
+ perf_event__handler_t process);
int perf_event__process(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample,
@@ -796,6 +800,10 @@ int perf_event__synthesize_mmap_events(struct perf_tool *tool,
bool mmap_data,
unsigned int proc_map_timeout);
+int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
+ perf_event__handler_t process,
+ struct machine *machine);
+
size_t perf_event__fprintf_comm(union perf_event *event, FILE *fp);
size_t perf_event__fprintf_mmap(union perf_event *event, FILE *fp);
size_t perf_event__fprintf_mmap2(union perf_event *event, FILE *fp);
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index dd7ab0731167..e7b4a8b513f2 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -1387,6 +1387,32 @@ static bool machine__uses_kcore(struct machine *machine)
return false;
}
+static bool perf_event__is_extra_kernel_mmap(struct machine *machine,
+ union perf_event *event)
+{
+ return machine__is(machine, "x86_64") &&
+ is_entry_trampoline(event->mmap.filename);
+}
+
+static int machine__process_extra_kernel_map(struct machine *machine,
+ union perf_event *event)
+{
+ struct map *kernel_map = machine__kernel_map(machine);
+ struct dso *kernel = kernel_map ? kernel_map->dso : NULL;
+ struct extra_kernel_map xm = {
+ .start = event->mmap.start,
+ .end = event->mmap.start + event->mmap.len,
+ .pgoff = event->mmap.pgoff,
+ };
+
+ if (kernel == NULL)
+ return -1;
+
+ strlcpy(xm.name, event->mmap.filename, KMAP_NAME_LEN);
+
+ return machine__create_extra_kernel_map(machine, kernel, &xm);
+}
+
static int machine__process_kernel_mmap_event(struct machine *machine,
union perf_event *event)
{
@@ -1490,6 +1516,8 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
*/
dso__load(kernel, machine__kernel_map(machine));
}
+ } else if (perf_event__is_extra_kernel_mmap(machine, event)) {
+ return machine__process_extra_kernel_map(machine, event);
}
return 0;
out_problem:
* [tip:perf/core] perf kcore_copy: Keep phdr data in a list
2018-05-22 10:54 ` [PATCH V3 10/17] perf buildid-cache: kcore_copy: Keep phdr data in a list Adrian Hunter
@ 2018-05-24 5:41 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:41 UTC (permalink / raw)
To: linux-tip-commits
Cc: dave.hansen, jolsa, alexander.shishkin, hpa, luto, joro, mingo,
tglx, ak, peterz, acme, adrian.hunter, linux-kernel
Commit-ID: f6838209484d5cfb368ca5c61d150cc4054eef59
Gitweb: https://git.kernel.org/tip/f6838209484d5cfb368ca5c61d150cc4054eef59
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:38 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:40 -0300
perf kcore_copy: Keep phdr data in a list
Currently, kcore_copy makes 2 program headers, one for the kernel text
(namely kernel_map) and one for the modules (namely modules_map). Now
more program headers are needed, but treating each program header as a
special case results in much more code.
Instead, in preparation to add more program headers, change to keep
program header data (phdr_data) in a list.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-11-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/symbol-elf.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 48943b834f11..b13873a6f368 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1388,6 +1388,7 @@ struct phdr_data {
off_t offset;
u64 addr;
u64 len;
+ struct list_head node;
};
struct kcore_copy_info {
@@ -1399,6 +1400,7 @@ struct kcore_copy_info {
u64 last_module_symbol;
struct phdr_data kernel_map;
struct phdr_data modules_map;
+ struct list_head phdrs;
};
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
@@ -1510,6 +1512,11 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
if (elf_read_maps(elf, true, kcore_copy__read_map, kci) < 0)
return -1;
+ if (kci->kernel_map.len)
+ list_add_tail(&kci->kernel_map.node, &kci->phdrs);
+ if (kci->modules_map.len)
+ list_add_tail(&kci->modules_map.node, &kci->phdrs);
+
return 0;
}
@@ -1678,6 +1685,8 @@ int kcore_copy(const char *from_dir, const char *to_dir)
char kcore_filename[PATH_MAX];
char extract_filename[PATH_MAX];
+ INIT_LIST_HEAD(&kci.phdrs);
+
if (kcore_copy__copy_file(from_dir, to_dir, "kallsyms"))
return -1;
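The list bookkeeping this patch introduces can be sketched in plain C. Below is a userspace approximation of the kernel's intrusive list_head pattern, not the kernel implementation itself; the `demo()` helper, `list_count()`, and the addresses in them are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for the kernel's intrusive list_head. */
struct list_head { struct list_head *next, *prev; };

static void init_list(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

/* Program header data now carries its own list node, as in the patch. */
struct phdr_data {
	uint64_t addr, len;
	struct list_head node;
};

static size_t list_count(const struct list_head *h)
{
	size_t n = 0;
	const struct list_head *pos;

	for (pos = h->next; pos != h; pos = pos->next)
		n++;
	return n;
}

/* Mirror kcore_copy__read_maps(): queue only the maps that were found. */
size_t demo(void)
{
	struct list_head phdrs;
	struct phdr_data kernel_map  = { .addr = 0xffffffff81000000ULL, .len = 0x1000 };
	struct phdr_data modules_map = { .addr = 0xffffffffa0000000ULL, .len = 0 };

	init_list(&phdrs);
	if (kernel_map.len)
		list_add_tail(&kernel_map.node, &phdrs);
	if (modules_map.len)
		list_add_tail(&modules_map.node, &phdrs);

	return list_count(&phdrs);
}
```

The payoff of the intrusive-list form is visible in the later patches: adding a third kind of map becomes one more `list_add_tail()` rather than another special-cased struct member.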
* [tip:perf/core] perf kcore_copy: Keep a count of phdrs
2018-05-22 10:54 ` [PATCH V3 11/17] perf buildid-cache: kcore_copy: Keep a count of phdrs Adrian Hunter
@ 2018-05-24 5:42 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:42 UTC (permalink / raw)
To: linux-tip-commits
Cc: tglx, luto, dave.hansen, linux-kernel, peterz, hpa, jolsa, acme,
joro, adrian.hunter, ak, mingo, alexander.shishkin
Commit-ID: 6e97957d3d30552c415292bb08a0e5f3c459c027
Gitweb: https://git.kernel.org/tip/6e97957d3d30552c415292bb08a0e5f3c459c027
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:39 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:41 -0300
perf kcore_copy: Keep a count of phdrs
In preparation to add more program headers, keep a count of phdrs.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-12-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/symbol-elf.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index b13873a6f368..4e7b71e8ac0e 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1398,6 +1398,7 @@ struct kcore_copy_info {
u64 last_symbol;
u64 first_module;
u64 last_module_symbol;
+ size_t phnum;
struct phdr_data kernel_map;
struct phdr_data modules_map;
struct list_head phdrs;
@@ -1517,6 +1518,8 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
if (kci->modules_map.len)
list_add_tail(&kci->modules_map.node, &kci->phdrs);
+ kci->phnum = !!kci->kernel_map.len + !!kci->modules_map.len;
+
return 0;
}
@@ -1678,7 +1681,6 @@ int kcore_copy(const char *from_dir, const char *to_dir)
{
struct kcore kcore;
struct kcore extract;
- size_t count = 2;
int idx = 0, err = -1;
off_t offset = page_size, sz, modules_offset = 0;
struct kcore_copy_info kci = { .stext = 0, };
@@ -1705,10 +1707,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
if (kcore__init(&extract, extract_filename, kcore.elfclass, false))
goto out_kcore_close;
- if (!kci.modules_map.addr)
- count -= 1;
-
- if (kcore__copy_hdr(&kcore, &extract, count))
+ if (kcore__copy_hdr(&kcore, &extract, kci.phnum))
goto out_extract_close;
if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
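The counting in this patch relies on `!!` collapsing any nonzero length to 1, so summing `!!lengths` gives the number of maps actually present. A standalone sketch (the helper name is invented, not from the patch):

```c
#include <assert.h>
#include <stddef.h>

/* !!x is 0 for x == 0 and 1 otherwise, so the sum below is the number of
 * maps that were found -- the value the patch stores in kci->phnum. */
size_t count_present(unsigned long kernel_len, unsigned long modules_len)
{
	return (size_t)(!!kernel_len + !!modules_len);
}
```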
* [tip:perf/core] perf kcore_copy: Calculate offset from phnum
2018-05-22 10:54 ` [PATCH V3 12/17] perf buildid-cache: kcore_copy: Calculate offset from phnum Adrian Hunter
@ 2018-05-24 5:42 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:42 UTC (permalink / raw)
To: linux-tip-commits
Cc: peterz, ak, tglx, acme, luto, dave.hansen, adrian.hunter, mingo,
hpa, linux-kernel, jolsa, alexander.shishkin, joro
Commit-ID: c9dd1d894958b81a329ec01e7dd03b92eca52789
Gitweb: https://git.kernel.org/tip/c9dd1d894958b81a329ec01e7dd03b92eca52789
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:40 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:41 -0300
perf kcore_copy: Calculate offset from phnum
In preparation to add more program headers, calculate offset from the
number of phdrs.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-13-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/symbol-elf.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 4e7b71e8ac0e..4aec12102e19 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1682,7 +1682,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
struct kcore kcore;
struct kcore extract;
int idx = 0, err = -1;
- off_t offset = page_size, sz, modules_offset = 0;
+ off_t offset, sz, modules_offset = 0;
struct kcore_copy_info kci = { .stext = 0, };
char kcore_filename[PATH_MAX];
char extract_filename[PATH_MAX];
@@ -1710,6 +1710,10 @@ int kcore_copy(const char *from_dir, const char *to_dir)
if (kcore__copy_hdr(&kcore, &extract, kci.phnum))
goto out_extract_close;
+ offset = gelf_fsize(extract.elf, ELF_T_EHDR, 1, EV_CURRENT) +
+ gelf_fsize(extract.elf, ELF_T_PHDR, kci.phnum, EV_CURRENT);
+ offset = round_up(offset, page_size);
+
if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
kci.kernel_map.len))
goto out_extract_close;
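The new offset computation replaces the hard-coded one-page assumption with the actual header footprint: ELF header plus program header table, rounded up to a page. A self-contained sketch, assuming ELF64 sizes (64-byte ehdr, 56-byte phdr entries, which is what gelf_fsize() returns for ELFCLASS64); the function names are illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static uint64_t round_up_u64(uint64_t v, uint64_t align)
{
	/* align must be a power of two, as page_size is. */
	return (v + align - 1) & ~(align - 1);
}

/* File offset of the first payload byte: ELF header plus the program
 * header table, rounded up to a page boundary. */
uint64_t payload_offset(size_t phnum, uint64_t page_size)
{
	uint64_t hdrs = 64 + 56 * (uint64_t)phnum; /* ELF64 ehdr + phdrs */

	return round_up_u64(hdrs, page_size);
}
```

With only a couple of phdrs the result is still one page, matching the old hard-coded value; the rounding only starts to matter once enough trampoline phdrs push the table past a page.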
* [tip:perf/core] perf kcore_copy: Layout sections
2018-05-22 10:54 ` [PATCH V3 13/17] perf buildid-cache: kcore_copy: Layout sections Adrian Hunter
@ 2018-05-24 5:43 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:43 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, luto, joro, ak, peterz, mingo, jolsa, dave.hansen,
hpa, alexander.shishkin, acme, tglx, adrian.hunter
Commit-ID: 15acef6c3727cfe0bc9d1f6b273cca46689e8cd8
Gitweb: https://git.kernel.org/tip/15acef6c3727cfe0bc9d1f6b273cca46689e8cd8
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:41 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:42 -0300
perf kcore_copy: Layout sections
In preparation to add more program headers, layout the relative offset
of each section.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-14-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/symbol-elf.c | 25 ++++++++++++++++++++++---
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 4aec12102e19..3e76a0efd15c 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1386,6 +1386,7 @@ static off_t kcore__write(struct kcore *kcore)
struct phdr_data {
off_t offset;
+ off_t rel;
u64 addr;
u64 len;
struct list_head node;
@@ -1404,6 +1405,9 @@ struct kcore_copy_info {
struct list_head phdrs;
};
+#define kcore_copy__for_each_phdr(k, p) \
+ list_for_each_entry((p), &(k)->phdrs, node)
+
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
u64 start)
{
@@ -1518,11 +1522,21 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
if (kci->modules_map.len)
list_add_tail(&kci->modules_map.node, &kci->phdrs);
- kci->phnum = !!kci->kernel_map.len + !!kci->modules_map.len;
-
return 0;
}
+static void kcore_copy__layout(struct kcore_copy_info *kci)
+{
+ struct phdr_data *p;
+ off_t rel = 0;
+
+ kcore_copy__for_each_phdr(kci, p) {
+ p->rel = rel;
+ rel += p->len;
+ kci->phnum += 1;
+ }
+}
+
static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
Elf *elf)
{
@@ -1558,7 +1572,12 @@ static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
if (kci->first_module && !kci->last_module_symbol)
return -1;
- return kcore_copy__read_maps(kci, elf);
+ if (kcore_copy__read_maps(kci, elf))
+ return -1;
+
+ kcore_copy__layout(kci);
+
+ return 0;
}
static int kcore_copy__copy_file(const char *from_dir, const char *to_dir,
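kcore_copy__layout() in this patch assigns each phdr a relative offset equal to the total length of the phdrs before it, counting phnum as it goes. The same logic over a plain array, purely for illustration (the struct and helper names are made up):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct pd { uint64_t len; uint64_t rel; };

/* Lay sections out back to back: each one starts where the previous
 * ended, and phnum ends up as the number of sections laid out. */
size_t layout(struct pd *p, size_t n)
{
	uint64_t rel = 0;
	size_t phnum = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		p[i].rel = rel;
		rel += p[i].len;
		phnum++;
	}
	return phnum;
}

uint64_t demo_third_rel(void)
{
	struct pd p[3] = { { .len = 0x1000 }, { .len = 0x2000 }, { .len = 0x800 } };

	layout(p, 3);
	return p[2].rel; /* 0x1000 + 0x2000 */
}
```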
* [tip:perf/core] perf kcore_copy: Iterate phdrs
2018-05-22 10:54 ` [PATCH V3 14/17] perf buildid-cache: kcore_copy: Iterate phdrs Adrian Hunter
@ 2018-05-24 5:43 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:43 UTC (permalink / raw)
To: linux-tip-commits
Cc: hpa, peterz, alexander.shishkin, ak, dave.hansen, acme,
adrian.hunter, tglx, joro, jolsa, mingo, luto, linux-kernel
Commit-ID: d2c959803c8843f64e419d833dc3722154c82492
Gitweb: https://git.kernel.org/tip/d2c959803c8843f64e419d833dc3722154c82492
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:42 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:42 -0300
perf kcore_copy: Iterate phdrs
In preparation to add more program headers, iterate phdrs instead of
assuming there is only one for the kernel text and one for the modules.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-15-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/symbol-elf.c | 25 ++++++++++---------------
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 3e76a0efd15c..91b8cfb045ec 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1701,10 +1701,11 @@ int kcore_copy(const char *from_dir, const char *to_dir)
struct kcore kcore;
struct kcore extract;
int idx = 0, err = -1;
- off_t offset, sz, modules_offset = 0;
+ off_t offset, sz;
struct kcore_copy_info kci = { .stext = 0, };
char kcore_filename[PATH_MAX];
char extract_filename[PATH_MAX];
+ struct phdr_data *p;
INIT_LIST_HEAD(&kci.phdrs);
@@ -1733,14 +1734,10 @@ int kcore_copy(const char *from_dir, const char *to_dir)
gelf_fsize(extract.elf, ELF_T_PHDR, kci.phnum, EV_CURRENT);
offset = round_up(offset, page_size);
- if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
- kci.kernel_map.len))
- goto out_extract_close;
+ kcore_copy__for_each_phdr(&kci, p) {
+ off_t offs = p->rel + offset;
- if (kci.modules_map.addr) {
- modules_offset = offset + kci.kernel_map.len;
- if (kcore__add_phdr(&extract, idx, modules_offset,
- kci.modules_map.addr, kci.modules_map.len))
+ if (kcore__add_phdr(&extract, idx++, offs, p->addr, p->len))
goto out_extract_close;
}
@@ -1748,14 +1745,12 @@ int kcore_copy(const char *from_dir, const char *to_dir)
if (sz < 0 || sz > offset)
goto out_extract_close;
- if (copy_bytes(kcore.fd, kci.kernel_map.offset, extract.fd, offset,
- kci.kernel_map.len))
- goto out_extract_close;
+ kcore_copy__for_each_phdr(&kci, p) {
+ off_t offs = p->rel + offset;
- if (modules_offset && copy_bytes(kcore.fd, kci.modules_map.offset,
- extract.fd, modules_offset,
- kci.modules_map.len))
- goto out_extract_close;
+ if (copy_bytes(kcore.fd, p->offset, extract.fd, offs, p->len))
+ goto out_extract_close;
+ }
if (kcore_copy__compare_file(from_dir, to_dir, "modules"))
goto out_extract_close;
* [tip:perf/core] perf kcore_copy: Get rid of kernel_map
2018-05-22 10:54 ` [PATCH V3 15/17] perf buildid-cache: kcore_copy: Get rid of kernel_map Adrian Hunter
@ 2018-05-24 5:44 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:44 UTC (permalink / raw)
To: linux-tip-commits
Cc: jolsa, tglx, linux-kernel, ak, joro, adrian.hunter, acme, hpa,
dave.hansen, luto, mingo, alexander.shishkin, peterz
Commit-ID: b4503cdb67098b2f08320c2c83df758ea72a4431
Gitweb: https://git.kernel.org/tip/b4503cdb67098b2f08320c2c83df758ea72a4431
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:43 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:43 -0300
perf kcore_copy: Get rid of kernel_map
In preparation to add more program headers, get rid of kernel_map and
modules_map by moving ->kernel_map and ->modules_map to newly allocated
entries in the ->phdrs list.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-16-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/symbol-elf.c | 70 ++++++++++++++++++++++++++++++++------------
1 file changed, 52 insertions(+), 18 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 91b8cfb045ec..37d9324c277c 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1400,14 +1400,47 @@ struct kcore_copy_info {
u64 first_module;
u64 last_module_symbol;
size_t phnum;
- struct phdr_data kernel_map;
- struct phdr_data modules_map;
struct list_head phdrs;
};
#define kcore_copy__for_each_phdr(k, p) \
list_for_each_entry((p), &(k)->phdrs, node)
+static struct phdr_data *phdr_data__new(u64 addr, u64 len, off_t offset)
+{
+ struct phdr_data *p = zalloc(sizeof(*p));
+
+ if (p) {
+ p->addr = addr;
+ p->len = len;
+ p->offset = offset;
+ }
+
+ return p;
+}
+
+static struct phdr_data *kcore_copy_info__addnew(struct kcore_copy_info *kci,
+ u64 addr, u64 len,
+ off_t offset)
+{
+ struct phdr_data *p = phdr_data__new(addr, len, offset);
+
+ if (p)
+ list_add_tail(&p->node, &kci->phdrs);
+
+ return p;
+}
+
+static void kcore_copy__free_phdrs(struct kcore_copy_info *kci)
+{
+ struct phdr_data *p, *tmp;
+
+ list_for_each_entry_safe(p, tmp, &kci->phdrs, node) {
+ list_del(&p->node);
+ free(p);
+ }
+}
+
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
u64 start)
{
@@ -1487,15 +1520,18 @@ static int kcore_copy__parse_modules(struct kcore_copy_info *kci,
return 0;
}
-static void kcore_copy__map(struct phdr_data *p, u64 start, u64 end, u64 pgoff,
- u64 s, u64 e)
+static int kcore_copy__map(struct kcore_copy_info *kci, u64 start, u64 end,
+ u64 pgoff, u64 s, u64 e)
{
- if (p->addr || s < start || s >= end)
- return;
+ u64 len, offset;
+
+ if (s < start || s >= end)
+ return 0;
- p->addr = s;
- p->offset = (s - start) + pgoff;
- p->len = e < end ? e - s : end - s;
+ offset = (s - start) + pgoff;
+ len = e < end ? e - s : end - s;
+
+ return kcore_copy_info__addnew(kci, s, len, offset) ? 0 : -1;
}
static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
@@ -1503,11 +1539,12 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
struct kcore_copy_info *kci = data;
u64 end = start + len;
- kcore_copy__map(&kci->kernel_map, start, end, pgoff, kci->stext,
- kci->etext);
+ if (kcore_copy__map(kci, start, end, pgoff, kci->stext, kci->etext))
+ return -1;
- kcore_copy__map(&kci->modules_map, start, end, pgoff, kci->first_module,
- kci->last_module_symbol);
+ if (kcore_copy__map(kci, start, end, pgoff, kci->first_module,
+ kci->last_module_symbol))
+ return -1;
return 0;
}
@@ -1517,11 +1554,6 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
if (elf_read_maps(elf, true, kcore_copy__read_map, kci) < 0)
return -1;
- if (kci->kernel_map.len)
- list_add_tail(&kci->kernel_map.node, &kci->phdrs);
- if (kci->modules_map.len)
- list_add_tail(&kci->modules_map.node, &kci->phdrs);
-
return 0;
}
@@ -1773,6 +1805,8 @@ out_unlink_kallsyms:
if (err)
kcore_copy__unlink(to_dir, "kallsyms");
+ kcore_copy__free_phdrs(&kci);
+
return err;
}
* [tip:perf/core] perf kcore_copy: Copy x86 PTI entry trampoline sections
2018-05-22 10:54 ` [PATCH V3 16/17] perf buildid-cache: kcore_copy: Copy x86 PTI entry trampoline sections Adrian Hunter
@ 2018-05-24 5:44 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:44 UTC (permalink / raw)
To: linux-tip-commits
Cc: jolsa, luto, mingo, joro, alexander.shishkin, acme, hpa, tglx,
adrian.hunter, linux-kernel, peterz, ak, dave.hansen
Commit-ID: a1a3a0624e6cd0e2c46a7400800a5e687521a504
Gitweb: https://git.kernel.org/tip/a1a3a0624e6cd0e2c46a7400800a5e687521a504
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:44 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:43 -0300
perf kcore_copy: Copy x86 PTI entry trampoline sections
Identify and copy any sections for x86 PTI entry trampolines.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-17-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/symbol-elf.c | 42 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 37d9324c277c..584966913aeb 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1392,6 +1392,11 @@ struct phdr_data {
struct list_head node;
};
+struct sym_data {
+ u64 addr;
+ struct list_head node;
+};
+
struct kcore_copy_info {
u64 stext;
u64 etext;
@@ -1401,6 +1406,7 @@ struct kcore_copy_info {
u64 last_module_symbol;
size_t phnum;
struct list_head phdrs;
+ struct list_head syms;
};
#define kcore_copy__for_each_phdr(k, p) \
@@ -1441,6 +1447,29 @@ static void kcore_copy__free_phdrs(struct kcore_copy_info *kci)
}
}
+static struct sym_data *kcore_copy__new_sym(struct kcore_copy_info *kci,
+ u64 addr)
+{
+ struct sym_data *s = zalloc(sizeof(*s));
+
+ if (s) {
+ s->addr = addr;
+ list_add_tail(&s->node, &kci->syms);
+ }
+
+ return s;
+}
+
+static void kcore_copy__free_syms(struct kcore_copy_info *kci)
+{
+ struct sym_data *s, *tmp;
+
+ list_for_each_entry_safe(s, tmp, &kci->syms, node) {
+ list_del(&s->node);
+ free(s);
+ }
+}
+
static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
u64 start)
{
@@ -1471,6 +1500,9 @@ static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
return 0;
}
+ if (is_entry_trampoline(name) && !kcore_copy__new_sym(kci, start))
+ return -1;
+
return 0;
}
@@ -1538,6 +1570,7 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
{
struct kcore_copy_info *kci = data;
u64 end = start + len;
+ struct sym_data *sdat;
if (kcore_copy__map(kci, start, end, pgoff, kci->stext, kci->etext))
return -1;
@@ -1546,6 +1579,13 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
kci->last_module_symbol))
return -1;
+ list_for_each_entry(sdat, &kci->syms, node) {
+ u64 s = round_down(sdat->addr, page_size);
+
+ if (kcore_copy__map(kci, start, end, pgoff, s, s + len))
+ return -1;
+ }
+
return 0;
}
@@ -1740,6 +1780,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
struct phdr_data *p;
INIT_LIST_HEAD(&kci.phdrs);
+ INIT_LIST_HEAD(&kci.syms);
if (kcore_copy__copy_file(from_dir, to_dir, "kallsyms"))
return -1;
@@ -1806,6 +1847,7 @@ out_unlink_kallsyms:
kcore_copy__unlink(to_dir, "kallsyms");
kcore_copy__free_phdrs(&kci);
+ kcore_copy__free_syms(&kci);
return err;
}
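This patch rounds each entry-trampoline symbol down to its page start before asking kcore_copy__map() for a page-sized range. A hedged sketch of just that rounding (helper names invented; the example address mirrors the unresolved 0xfffffe00000e2xxx samples in the cover letter):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t round_down_u64(uint64_t v, uint64_t align)
{
	/* align must be a power of two, as page_size is. */
	return v & ~(align - 1);
}

/* Start of the page containing an entry-trampoline symbol, i.e. what the
 * patch computes with round_down(sdat->addr, page_size). */
uint64_t trampoline_page(uint64_t sym_addr, uint64_t page_size)
{
	return round_down_u64(sym_addr, page_size);
}
```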
* [tip:perf/core] perf kcore_copy: Amend the offset of sections that remap kernel text
2018-05-22 10:54 ` [PATCH V3 17/17] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text Adrian Hunter
@ 2018-05-24 5:45 ` tip-bot for Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-24 5:45 UTC (permalink / raw)
To: linux-tip-commits
Cc: hpa, peterz, tglx, alexander.shishkin, jolsa, joro, luto, ak,
acme, mingo, adrian.hunter, dave.hansen, linux-kernel
Commit-ID: 22916fdb9c50e8fb303bdcedca88fd8798a85844
Gitweb: https://git.kernel.org/tip/22916fdb9c50e8fb303bdcedca88fd8798a85844
Author: Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Tue, 22 May 2018 13:54:45 +0300
Committer: Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Wed, 23 May 2018 10:26:44 -0300
perf kcore_copy: Amend the offset of sections that remap kernel text
x86 PTI entry trampolines all map to the same physical page. If that is
reflected in the program headers of /proc/kcore, then do the same for the
copy of kcore.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1526986485-6562-18-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/symbol-elf.c | 53 ++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 51 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 584966913aeb..29770ea61768 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1390,6 +1390,7 @@ struct phdr_data {
u64 addr;
u64 len;
struct list_head node;
+ struct phdr_data *remaps;
};
struct sym_data {
@@ -1597,16 +1598,62 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
return 0;
}
+static void kcore_copy__find_remaps(struct kcore_copy_info *kci)
+{
+ struct phdr_data *p, *k = NULL;
+ u64 kend;
+
+ if (!kci->stext)
+ return;
+
+ /* Find phdr that corresponds to the kernel map (contains stext) */
+ kcore_copy__for_each_phdr(kci, p) {
+ u64 pend = p->addr + p->len - 1;
+
+ if (p->addr <= kci->stext && pend >= kci->stext) {
+ k = p;
+ break;
+ }
+ }
+
+ if (!k)
+ return;
+
+ kend = k->offset + k->len;
+
+ /* Find phdrs that remap the kernel */
+ kcore_copy__for_each_phdr(kci, p) {
+ u64 pend = p->offset + p->len;
+
+ if (p == k)
+ continue;
+
+ if (p->offset >= k->offset && pend <= kend)
+ p->remaps = k;
+ }
+}
+
static void kcore_copy__layout(struct kcore_copy_info *kci)
{
struct phdr_data *p;
off_t rel = 0;
+ kcore_copy__find_remaps(kci);
+
kcore_copy__for_each_phdr(kci, p) {
- p->rel = rel;
- rel += p->len;
+ if (!p->remaps) {
+ p->rel = rel;
+ rel += p->len;
+ }
kci->phnum += 1;
}
+
+ kcore_copy__for_each_phdr(kci, p) {
+ struct phdr_data *k = p->remaps;
+
+ if (k)
+ p->rel = p->offset - k->offset + k->rel;
+ }
}
static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
@@ -1821,6 +1868,8 @@ int kcore_copy(const char *from_dir, const char *to_dir)
kcore_copy__for_each_phdr(&kci, p) {
off_t offs = p->rel + offset;
+ if (p->remaps)
+ continue;
if (copy_bytes(kcore.fd, p->offset, extract.fd, offs, p->len))
goto out_extract_close;
}
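The remap rule in this patch: a phdr whose file-offset range lies entirely inside the kernel text phdr's range maps the same bytes, so rather than copying those bytes again, its relative offset is redirected into the kernel data. A standalone sketch with invented names and example numbers, combining what kcore_copy__find_remaps() and kcore_copy__layout() do together:

```c
#include <assert.h>
#include <stdint.h>

struct seg {
	uint64_t offset;  /* file offset in the source kcore */
	uint64_t len;
	uint64_t rel;     /* relative offset in the copy */
	int remaps;       /* nonzero if this seg reuses the kernel seg's data */
};

/* If p's source range is contained in k's, mark it as a remap and point
 * its rel at the same bytes inside k's already-laid-out data. */
void amend_remap(struct seg *k, struct seg *p)
{
	uint64_t kend = k->offset + k->len;
	uint64_t pend = p->offset + p->len;

	if (p == k)
		return;
	if (p->offset >= k->offset && pend <= kend) {
		p->remaps = 1;
		p->rel = p->offset - k->offset + k->rel;
	}
}

uint64_t demo(void)
{
	struct seg kernel = { .offset = 0x1000, .len = 0x100000, .rel = 0 };
	struct seg tramp  = { .offset = 0x5000, .len = 0x1000 };

	amend_remap(&kernel, &tramp);
	return tramp.rel; /* 0x5000 - 0x1000 + 0 */
}
```

This is why the copy loop in the patch skips segments with `p->remaps` set: their bytes are already present at the amended offset.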
* Re: [PATCH V3 00/17] perf tools and x86 PTI entry trampolines
2018-05-23 19:35 ` [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Arnaldo Carvalho de Melo
@ 2018-05-24 9:23 ` Adrian Hunter
0 siblings, 0 replies; 41+ messages in thread
From: Adrian Hunter @ 2018-05-24 9:23 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
Joerg Roedel, Jiri Olsa, linux-kernel, x86
On 23/05/18 22:35, Arnaldo Carvalho de Melo wrote:
> Em Tue, May 22, 2018 at 01:54:28PM +0300, Adrian Hunter escreveu:
>> Original Cover email:
>>
>> Perf tools do not know about x86 PTI entry trampolines - see example
>> below. These patches add a workaround, namely "perf tools: Workaround
>> missing maps for x86 PTI entry trampolines", which has the limitation
>> that it hard codes the addresses. Note that the workaround will work for
>> old kernels and old perf.data files, but not for future kernels if the
>> trampoline addresses are ever changed.
>>
>> At present, perf tools uses /proc/kallsyms to construct a memory map for
>> the kernel. Recording such a map in the perf.data file is necessary to
>> deal with kernel relocation and KASLR.
>>
>> While it is reasonable on its own terms to add symbols for the trampolines
>> to /proc/kallsyms, the motivation here is to have perf tools use them to
>> create memory maps in the same fashion as is done for the kernel text.
>>
>> So the first 2 patches add symbols to /proc/kallsyms for the trampolines:
>>
>> kallsyms: Simplify update_iter_mod()
>> kallsyms, x86: Export addresses of syscall trampolines
>>
>> perf tools have the ability to use /proc/kcore (in conjunction with
>> /proc/kallsyms) as the kernel image. So the next 2 patches add program
>> headers for the trampolines to the kcore ELF:
>>
>> x86: Add entry trampolines to kcore
>> x86: kcore: Give entry trampolines all the same offset in kcore
>>
>> It is worth noting that, with the kcore changes alone, perf tools require
>> no changes to recognise the trampolines when using /proc/kcore.
>>
>> Similarly, if perf tools are used with a matching kallsyms only (by denying
>> access to /proc/kcore or a vmlinux image), then the kallsyms patches are
>> sufficient to recognise the trampolines with no changes needed to the
>> tools.
>>
>> However, in the general case, when using vmlinux or dealing with
>> relocations, perf tools needs memory maps for the trampolines. Because the
>> kernel text map is constructed as a special case, using the same approach
>> for the trampolines means treating them as a special case also, which
>> requires a number of changes to perf tools, and the remaining patches deal
>> with that.
>>
>>
>> Example: make a program that does lots of small syscalls e.g.
>>
>> $ cat uname_x_n.c
>>
>> #include <sys/utsname.h>
>> #include <stdlib.h>
>>
>> int main(int argc, char *argv[])
>> {
>> long n = argc > 1 ? strtol(argv[1], NULL, 0) : 0;
>> struct utsname u;
>>
>> while (n--)
>> uname(&u);
>>
>> return 0;
>> }
>>
>> and then:
>>
>> sudo perf record uname_x_n 100000
>> sudo perf report --stdio
>>
>> Before the changes, there are unknown symbols:
>>
>> # Overhead Command Shared Object Symbol
>> # ........ ......... ................ ..................................
>> #
>> 41.91% uname_x_n [kernel.vmlinux] [k] syscall_return_via_sysret
>> 19.22% uname_x_n [kernel.vmlinux] [k] copy_user_enhanced_fast_string
>> 18.70% uname_x_n [unknown] [k] 0xfffffe00000e201b
>> 4.09% uname_x_n libc-2.19.so [.] __GI___uname
>> 3.08% uname_x_n [kernel.vmlinux] [k] do_syscall_64
>> 3.02% uname_x_n [unknown] [k] 0xfffffe00000e2025
>> 2.32% uname_x_n [kernel.vmlinux] [k] down_read
>> 2.27% uname_x_n ld-2.19.so [.] _dl_start
>> 1.97% uname_x_n [unknown] [k] 0xfffffe00000e201e
>> 1.25% uname_x_n [kernel.vmlinux] [k] up_read
>> 1.02% uname_x_n [unknown] [k] 0xfffffe00000e200c
>> 0.99% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64
>> 0.16% uname_x_n [kernel.vmlinux] [k] flush_signal_handlers
>> 0.01% perf [kernel.vmlinux] [k] native_sched_clock
>> 0.00% perf [kernel.vmlinux] [k] native_write_msr
>>
>> After the changes, there are none:
>>
>> # Overhead Command Shared Object Symbol
>> # ........ ......... ................ ..................................
>> #
>> 41.91% uname_x_n [kernel.vmlinux] [k] syscall_return_via_sysret
>> 24.70% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64_trampoline
>> 19.22% uname_x_n [kernel.vmlinux] [k] copy_user_enhanced_fast_string
>> 4.09% uname_x_n libc-2.19.so [.] __GI___uname
>> 3.08% uname_x_n [kernel.vmlinux] [k] do_syscall_64
>> 2.32% uname_x_n [kernel.vmlinux] [k] down_read
>> 2.27% uname_x_n ld-2.19.so [.] _dl_start
>> 1.25% uname_x_n [kernel.vmlinux] [k] up_read
>> 0.99% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64
>> 0.16% uname_x_n [kernel.vmlinux] [k] flush_signal_handlers
>> 0.01% perf [kernel.vmlinux] [k] native_sched_clock
>> 0.00% perf [kernel.vmlinux] [k] native_write_msr
>
> So, with just the userspace patches I get, recording with the new tool,
> and then report'ing with old and new tools:
>
> Before:
>
> [root@seventh c]# perf-4.17.rc6.ga048a0-torvalds.master report --stdio
> # To display the perf.data header info, please use --header/--header-only options.
> #
> #
> # Total Lost Samples: 0
> #
> # Samples: 83 of event 'cycles:ppp'
> # Event count (approx.): 86724689
> #
> # Overhead Command Shared Object Symbol
> # ........ ......... ................ ..................................
> #
> 35.12% uname_x_n [kernel.vmlinux] [k] syscall_return_via_sysret
> 20.86% uname_x_n [unknown] [k] 0xfffffe000005e01b
> 11.09% uname_x_n [kernel.vmlinux] [k] copy_user_enhanced_fast_string
> 8.58% uname_x_n [kernel.vmlinux] [k] __x64_sys_newuname
> 4.93% uname_x_n libc-2.26.so [.] __GI___uname
> 2.92% uname_x_n ld-2.26.so [.] dl_main
> 2.66% uname_x_n [kernel.vmlinux] [k] __x86_indirect_thunk_rax
> 2.46% uname_x_n [kernel.vmlinux] [k] do_syscall_64
> 2.18% uname_x_n [unknown] [k] 0xfffffe000005e01e
> 2.17% uname_x_n uname_x_n [.] main
> 2.14% uname_x_n [unknown] [k] 0xfffffe000005e00c
> 1.98% uname_x_n [unknown] [k] 0xfffffe000005e025
> 1.37% uname_x_n [kernel.vmlinux] [k] down_read
> 1.27% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64
> 0.23% uname_x_n [kernel.vmlinux] [k] get_random_u64
> 0.01% perf [kernel.vmlinux] [k] end_repeat_nmi
> 0.00% perf [kernel.vmlinux] [k] native_write_msr
>
>
> #
> # (Tip: Use --symfs <dir> if your symbol files are in non-standard locations)
> #
>
> After:
>
> [root@seventh c]# perf report --stdio
> # To display the perf.data header info, please use --header/--header-only options.
> #
> #
> # Total Lost Samples: 0
> #
> # Samples: 83 of event 'cycles:ppp'
> # Event count (approx.): 86724689
> #
> # Overhead Command Shared Object Symbol
> # ........ ......... ................ ..................................
> #
> 35.12% uname_x_n [kernel.vmlinux] [k] syscall_return_via_sysret
> 27.18% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64_trampoline
> 11.09% uname_x_n [kernel.vmlinux] [k] copy_user_enhanced_fast_string
> 8.58% uname_x_n [kernel.vmlinux] [k] __x64_sys_newuname
> 4.93% uname_x_n libc-2.26.so [.] __GI___uname
> 2.92% uname_x_n ld-2.26.so [.] dl_main
> 2.66% uname_x_n [kernel.vmlinux] [k] __x86_indirect_thunk_rax
> 2.46% uname_x_n [kernel.vmlinux] [k] do_syscall_64
> 2.17% uname_x_n uname_x_n [.] main
> 1.37% uname_x_n [kernel.vmlinux] [k] down_read
> 1.27% uname_x_n [kernel.vmlinux] [k] entry_SYSCALL_64
> 0.23% uname_x_n [kernel.vmlinux] [k] get_random_u64
> 0.01% perf [kernel.vmlinux] [k] end_repeat_nmi
> 0.00% perf [kernel.vmlinux] [k] native_write_msr
>
>
> #
> # (Tip: Generate a script for your data: perf script -g <lang>)
> #
> [root@seventh c]#
>
> What am I missing while testing this?
perf.data maps come from reading kallsyms, so you need a new kernel to get
the maps recorded into perf.data.
If you use old tools with a new perf.data file and new kernel, then it will
work for kallsyms or kcore but not vmlinux. This is because the old tools
do not know how to use the maps to calculate the _entry_trampoline offset
for vmlinux.
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH V3 00/17] perf tools and x86 PTI entry trampolines
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
` (17 preceding siblings ...)
2018-05-23 19:35 ` [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Arnaldo Carvalho de Melo
@ 2018-05-31 12:09 ` Adrian Hunter
2018-06-05 15:29 ` Arnaldo Carvalho de Melo
18 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-05-31 12:09 UTC (permalink / raw)
To: Thomas Gleixner, Arnaldo Carvalho de Melo
Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
Jiri Olsa, linux-kernel, x86
On 22/05/18 13:54, Adrian Hunter wrote:
> Hi
>
> Here is V3 of patches to support x86 PTI entry trampolines in perf tools.
>
> Patches also here:
> http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti-v3
> git://git.infradead.org/users/ahunter/linux-perf.git perf-tools-kpti-v3
>
Arnaldo has queued the tools patches, but there are still 3 kernel patches:
kallsyms: Simplify update_iter_mod()
kallsyms, x86: Export addresses of syscall trampolines
x86: Add entry trampolines to kcore
Are there any further comments on these? Can they be applied?
* Re: [PATCH V3 00/17] perf tools and x86 PTI entry trampolines
2018-05-31 12:09 ` Adrian Hunter
@ 2018-06-05 15:29 ` Arnaldo Carvalho de Melo
2018-06-05 16:00 ` Peter Zijlstra
0 siblings, 1 reply; 41+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-06-05 15:29 UTC (permalink / raw)
To: Adrian Hunter
Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
Joerg Roedel, Jiri Olsa, linux-kernel, x86
Em Thu, May 31, 2018 at 03:09:38PM +0300, Adrian Hunter escreveu:
> On 22/05/18 13:54, Adrian Hunter wrote:
> > Hi
> >
> > Here is V3 of patches to support x86 PTI entry trampolines in perf tools.
> >
> > Patches also here:
> > http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti-v3
> > git://git.infradead.org/users/ahunter/linux-perf.git perf-tools-kpti-v3
> >
>
> Arnaldo has queued the tools patches, but there are still 3 kernel patches:
>
> kallsyms: Simplify update_iter_mod()
> kallsyms, x86: Export addresses of syscall trampolines
> x86: Add entry trampolines to kcore
>
> Are there any further comments on these? Can they be applied?
Would be interesting to have some acked-by from kernel folks :-\
- Arnaldo
* Re: [PATCH V3 02/17] kallsyms, x86: Export addresses of syscall trampolines
2018-05-22 10:54 ` [PATCH V3 02/17] kallsyms, x86: Export addresses of syscall trampolines Adrian Hunter
@ 2018-06-05 16:00 ` Andi Kleen
2018-06-06 8:02 ` Adrian Hunter
0 siblings, 1 reply; 41+ messages in thread
From: Andi Kleen @ 2018-06-05 16:00 UTC (permalink / raw)
To: Adrian Hunter
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Ingo Molnar,
Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Alexander Shishkin, Dave Hansen, Joerg Roedel, Jiri Olsa,
linux-kernel, x86
> +#ifdef CONFIG_X86_64
> +int arch_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
> + char *name)
> +{
> + unsigned int cpu, ncpu;
> +
> + if (symnum >= num_possible_cpus())
> + return -EINVAL;
> +
> + for (cpu = cpumask_first(cpu_possible_mask), ncpu = 0;
> + cpu < num_possible_cpus() && ncpu < symnum;
> + cpu = cpumask_next(cpu, cpu_possible_mask), ncpu++)
> + ;
That is max_t(unsigned, cpumask_last(cpu_possible_mask), symnum)
Rest and other kernel patches look good to me
Acked-by: Andi Kleen <ak@linux.intel.com>
-Andi
* Re: [PATCH V3 00/17] perf tools and x86 PTI entry trampolines
2018-06-05 15:29 ` Arnaldo Carvalho de Melo
@ 2018-06-05 16:00 ` Peter Zijlstra
2018-06-05 16:04 ` Arnaldo Carvalho de Melo
0 siblings, 1 reply; 41+ messages in thread
From: Peter Zijlstra @ 2018-06-05 16:00 UTC (permalink / raw)
To: Arnaldo Carvalho de Melo
Cc: Adrian Hunter, Thomas Gleixner, Ingo Molnar, Andy Lutomirski,
H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
Joerg Roedel, Jiri Olsa, linux-kernel, x86
On Tue, Jun 05, 2018 at 12:29:43PM -0300, Arnaldo Carvalho de Melo wrote:
> > Arnaldo has queued the tools patches, but there are still 3 kernel patches:
> >
> > kallsyms: Simplify update_iter_mod()
> > kallsyms, x86: Export addresses of syscall trampolines
That one needs a changelog.
> > x86: Add entry trampolines to kcore
> >
> > Are there any further comments on these? Can they be applied?
>
> Would be interesting to have some acked-by from kernel folks :-\
Other than that, they look good I think. The update_iter_mod thing took
me a few tries, so maybe something can be done to the changelog there
too, maybe explicitly mention the rule for the *mod_end things.
* Re: [PATCH V3 00/17] perf tools and x86 PTI entry trampolines
2018-06-05 16:00 ` Peter Zijlstra
@ 2018-06-05 16:04 ` Arnaldo Carvalho de Melo
0 siblings, 0 replies; 41+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-06-05 16:04 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Adrian Hunter, Thomas Gleixner, Ingo Molnar, Andy Lutomirski,
H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
Joerg Roedel, Jiri Olsa, linux-kernel, x86
Em Tue, Jun 05, 2018 at 06:00:46PM +0200, Peter Zijlstra escreveu:
> On Tue, Jun 05, 2018 at 12:29:43PM -0300, Arnaldo Carvalho de Melo wrote:
> > > Arnaldo has queued the tools patches, but there are still 3 kernel patches:
> > >
> > > kallsyms: Simplify update_iter_mod()
> > > kallsyms, x86: Export addresses of syscall trampolines
>
> That one needs a changelog.
>
> > > x86: Add entry trampolines to kcore
> > >
> > > Are there any further comments on these? Can they be applied?
> >
> > Would be interesting to have some acked-by from kernel folks :-\
>
> Other than that, they look good I tihnk. The updatE_iter_mod thing took
> me a few tries, so maybe something can be done to the changelog there
> too, maybe explicitly mention the rule for the *mod_end things.
I was feeling daft, good to see you found that difficult to follow 8-)
- Arnaldo
* Re: [PATCH V3 02/17] kallsyms, x86: Export addresses of syscall trampolines
2018-06-05 16:00 ` Andi Kleen
@ 2018-06-06 8:02 ` Adrian Hunter
2018-06-06 10:50 ` Peter Zijlstra
0 siblings, 1 reply; 41+ messages in thread
From: Adrian Hunter @ 2018-06-06 8:02 UTC (permalink / raw)
To: Andi Kleen
Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Ingo Molnar,
Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
Alexander Shishkin, Dave Hansen, Joerg Roedel, Jiri Olsa,
linux-kernel, x86
On 05/06/18 19:00, Andi Kleen wrote:
>> +#ifdef CONFIG_X86_64
>> +int arch_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
>> + char *name)
>> +{
>> + unsigned int cpu, ncpu;
>> +
>> + if (symnum >= num_possible_cpus())
>> + return -EINVAL;
>> +
>> + for (cpu = cpumask_first(cpu_possible_mask), ncpu = 0;
>> + cpu < num_possible_cpus() && ncpu < symnum;
>> + cpu = cpumask_next(cpu, cpu_possible_mask), ncpu++)
>> + ;
>
> That is max_t(unsigned, cpumask_last(cpu_possible_mask), symnum)
I think it should be:
- cpu < num_possible_cpus() && ncpu < symnum;
+ ncpu < symnum;
Alex?
>
> Rest and other kernel patches look good to me
>
> Acked-by: Andi Kleen <ak@linux.intel.com>
>
> -Andi
>
* Re: [PATCH V3 02/17] kallsyms, x86: Export addresses of syscall trampolines
2018-06-06 8:02 ` Adrian Hunter
@ 2018-06-06 10:50 ` Peter Zijlstra
0 siblings, 0 replies; 41+ messages in thread
From: Peter Zijlstra @ 2018-06-06 10:50 UTC (permalink / raw)
To: Adrian Hunter
Cc: Andi Kleen, Thomas Gleixner, Arnaldo Carvalho de Melo,
Ingo Molnar, Andy Lutomirski, H. Peter Anvin, Alexander Shishkin,
Dave Hansen, Joerg Roedel, Jiri Olsa, linux-kernel, x86
On Wed, Jun 06, 2018 at 11:02:08AM +0300, Adrian Hunter wrote:
> On 05/06/18 19:00, Andi Kleen wrote:
> >> +#ifdef CONFIG_X86_64
> >> +int arch_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
> >> + char *name)
> >> +{
> >> + unsigned int cpu, ncpu;
> >> +
> >> + if (symnum >= num_possible_cpus())
> >> + return -EINVAL;
> >> +
> >> + for (cpu = cpumask_first(cpu_possible_mask), ncpu = 0;
> >> + cpu < num_possible_cpus() && ncpu < symnum;
> >> + cpu = cpumask_next(cpu, cpu_possible_mask), ncpu++)
> >> + ;
> >
> > That is max_t(unsigned, cpumask_last(cpu_possible_mask), symnum)
>
> I think it should be:
>
> - cpu < num_possible_cpus() && ncpu < symnum;
> + ncpu < symnum;
Since we're bike-shedding:
for_each_possible_cpu(cpu) {
if (ncpu++ >= symnum)
break;
}
end of thread, other threads:[~2018-06-06 10:51 UTC | newest]
Thread overview: 41+ messages
2018-05-22 10:54 [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 01/17] kallsyms: Simplify update_iter_mod() Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 02/17] kallsyms, x86: Export addresses of syscall trampolines Adrian Hunter
2018-06-05 16:00 ` Andi Kleen
2018-06-06 8:02 ` Adrian Hunter
2018-06-06 10:50 ` Peter Zijlstra
2018-05-22 10:54 ` [PATCH V3 03/17] x86: Add entry trampolines to kcore Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 04/17] perf tools: Add machine__nr_cpus_avail() Adrian Hunter
2018-05-24 5:38 ` [tip:perf/core] perf machine: Add nr_cpus_avail() tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 05/17] perf tools: Workaround missing maps for x86 PTI entry trampolines Adrian Hunter
2018-05-24 5:38 ` [tip:perf/core] perf machine: " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 06/17] perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols Adrian Hunter
2018-05-24 5:39 ` [tip:perf/core] perf machine: " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 07/17] perf tools: Allow for extra kernel maps Adrian Hunter
2018-05-24 5:39 ` [tip:perf/core] perf machine: " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 08/17] perf tools: Create maps for x86 PTI entry trampolines Adrian Hunter
2018-05-24 5:40 ` [tip:perf/core] perf machine: " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 09/17] perf tools: Synthesize and process mmap events " Adrian Hunter
2018-05-24 5:40 ` [tip:perf/core] perf machine: " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 10/17] perf buildid-cache: kcore_copy: Keep phdr data in a list Adrian Hunter
2018-05-24 5:41 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 11/17] perf buildid-cache: kcore_copy: Keep a count of phdrs Adrian Hunter
2018-05-24 5:42 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 12/17] perf buildid-cache: kcore_copy: Calculate offset from phnum Adrian Hunter
2018-05-24 5:42 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 13/17] perf buildid-cache: kcore_copy: Layout sections Adrian Hunter
2018-05-24 5:43 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 14/17] perf buildid-cache: kcore_copy: Iterate phdrs Adrian Hunter
2018-05-24 5:43 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 15/17] perf buildid-cache: kcore_copy: Get rid of kernel_map Adrian Hunter
2018-05-24 5:44 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 16/17] perf buildid-cache: kcore_copy: Copy x86 PTI entry trampoline sections Adrian Hunter
2018-05-24 5:44 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-22 10:54 ` [PATCH V3 17/17] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text Adrian Hunter
2018-05-24 5:45 ` [tip:perf/core] perf " tip-bot for Adrian Hunter
2018-05-23 19:35 ` [PATCH V3 00/17] perf tools and x86 PTI entry trampolines Arnaldo Carvalho de Melo
2018-05-24 9:23 ` Adrian Hunter
2018-05-31 12:09 ` Adrian Hunter
2018-06-05 15:29 ` Arnaldo Carvalho de Melo
2018-06-05 16:00 ` Peter Zijlstra
2018-06-05 16:04 ` Arnaldo Carvalho de Melo