From: Arnaldo Carvalho de Melo <acme@redhat.com>
To: Josh Hunt <johunt@akamai.com>
Cc: Jiri Olsa <jolsa@redhat.com>,
	john@metanate.com, jolsa@kernel.org,
	alexander.shishkin@linux.intel.com, khlebnikov@yandex-team.ru,
	namhyung@kernel.org, peterz@infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: 4.19 dwarf unwinding broken
Date: Thu, 3 Oct 2019 13:04:04 -0300
Message-ID: <20191003160404.GG3531@redhat.com>
In-Reply-To: <d0e0a675-1344-a47b-c27e-ea05230d88b8@akamai.com>

On Thu, Oct 03, 2019 at 08:17:38AM -0700, Josh Hunt wrote:
> On 10/3/19 3:03 AM, Jiri Olsa wrote:
> >On Thu, Oct 03, 2019 at 12:54:09AM -0700, Josh Hunt wrote:
> >>The following commit is breaking dwarf unwinding on 4.19 kernels:
> >
> >how?
> 
> When doing something like:
> perf record -p $(cat /var/run/app.pid) -g --call-graph dwarf -F 999 -- sleep 3
> 
> with 4.19.75 perf I see things like:
> 
> app_Thr00 26247 1810131.375329:     168288 cycles:ppp:
> 
> app_Thr01 26767 1810131.377449:     344415 cycles:ppp:
> 
> uvm:WorkerThread 26746 1810131.383052:        504 cycles:ppp:
>         ffffffff9f77cce0 _raw_spin_lock+0x10 (/boot/vmlinux-4.19.46)
>         ffffffff9f181527 __perf_event_task_sched_in+0xf7 (/boot/vmlinux-4.19.46)
>         ffffffff9f09a7b8 finish_task_switch+0x158 (/boot/vmlinux-4.19.46)
>         ffffffff9f778276 __schedule+0x2f6 (/boot/vmlinux-4.19.46)
>         ffffffff9f7787f2 schedule+0x32 (/boot/vmlinux-4.19.46)
>         ffffffff9f77bb0a schedule_hrtimeout_range_clock+0x8a (/boot/vmlinux-4.19.46)
>         ffffffff9f22ea12 poll_schedule_timeout.constprop.6+0x42 (/boot/vmlinux-4.19.46)
>         ffffffff9f22eeeb do_sys_poll+0x4ab (/boot/vmlinux-4.19.46)
>         ffffffff9f22fb7b __se_sys_poll+0x5b (/boot/vmlinux-4.19.46)
>         ffffffff9f0023de do_syscall_64+0x4e (/boot/vmlinux-4.19.46)
>         ffffffff9f800088 entry_SYSCALL_64+0x68 (/boot/vmlinux-4.19.46)
> ---
> 
> and with 4.19.75 perf with e5adfc3e7e77 reverted, those empty call
> stacks go away and other call stacks show more thread detail:
> 
> uvm:WorkerThread 26746 1810207.336391:          1 cycles:ppp:
>         ffffffff9f181505 __perf_event_task_sched_in+0xd5 (/boot/vmlinux-4.19.46)
>         ffffffff9f09a7b8 finish_task_switch+0x158 (/boot/vmlinux-4.19.46)
>         ffffffff9f778276 __schedule+0x2f6 (/boot/vmlinux-4.19.46)
>         ffffffff9f7787f2 schedule+0x32 (/boot/vmlinux-4.19.46)
>         ffffffff9f77bb0a schedule_hrtimeout_range_clock+0x8a (/boot/vmlinux-4.19.46)
>         ffffffff9f22ea12 poll_schedule_timeout.constprop.6+0x42 (/boot/vmlinux-4.19.46)
>         ffffffff9f22eeeb do_sys_poll+0x4ab (/boot/vmlinux-4.19.46)
>         ffffffff9f22fb7b __se_sys_poll+0x5b (/boot/vmlinux-4.19.46)
>         ffffffff9f0023de do_syscall_64+0x4e (/boot/vmlinux-4.19.46)
>         ffffffff9f800088 entry_SYSCALL_64+0x68 (/boot/vmlinux-4.19.46)
>             7f7ef3f5c90d [unknown] (/lib/x86_64-linux-gnu/libc-2.23.so)
>                  3eb5c99 poll+0xc9 (inlined)
>                  3eb5c99 colib::ipc::EventFd::wait+0xc9 (/usr/local/bin/app)
>                  3296779 uvm::WorkerThread::run+0x129 (/usr/local/bin/app)
>         ffffffffffffffff [unknown] ([unknown])
> 
> They also look the same as on the earlier kernel versions we have running.
> 
> In addition, e8ba2906f6b's changelog sounds very similar to what I
> was seeing. This application launches a number of threads and is
> definitely already running before perf is invoked.
> 
> Thanks for looking at this.
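
For reference, a minimal stand-in for that kind of workload (purely
hypothetical, not the actual application) would be something like the
program below: a process that starts a few busy worker threads and then
just sits there, so it is already running when perf attaches to its pid
with the command quoted above.

	/* threads.c - hypothetical reproducer: an already-running multi-threaded app.
	 * Build:  gcc -O2 -g -pthread -o threads threads.c
	 * Run:    ./threads &   then attach, e.g.:
	 *         perf record -p $(pidof threads) -g --call-graph dwarf -F 999 -- sleep 3
	 */
	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	#define NR_THREADS 4

	static void *worker(void *arg)
	{
		volatile unsigned long n = 0;

		for (;;)	/* burn cycles so samples land in these threads */
			n++;

		return NULL;
	}

	int main(void)
	{
		pthread_t tid[NR_THREADS];
		int i;

		for (i = 0; i < NR_THREADS; i++)
			pthread_create(&tid[i], NULL, worker, NULL);

		printf("pid %d, %d worker threads started\n", (int)getpid(), NR_THREADS);
		pause();	/* keep running so perf can attach later */
		return 0;
	}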


So, what if at __event__synthesize_thread(), where we currently have:


                if (pid == tgid &&
                    perf_event__synthesize_mmap_events(tool, mmap_event, pid, tgid,
                                                       process, machine, mmap_data))


we did something like:

	if (pid != tgid && machine__find_thread(machine, tgid, tgid) == NULL) {
		struct thread *t = thread__new(tgid, tgid);

		/*
		 * ... then use the info for pid to synthesize the thread
		 * group leader, so that we get the sharing we need?
		 */
	}

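A slightly more spelled-out sketch of that idea, assuming it sits in
__event__synthesize_thread() right before the existing pid == tgid
check; the error handling, the thread__put() reference drops, and
whether thread__new() (which only allocates) or machine__findnew_thread()
(which would also register the leader with the machine) is the right
call here are open questions, not a tested patch:

	/*
	 * Sketch only (untested): when synthesizing a secondary thread
	 * (pid != tgid) whose group leader has not been seen yet, emit
	 * the mmap events for the leader first, so the maps hang off the
	 * thread group leader and get shared by all of its threads.
	 */
	if (pid != tgid && machine__find_thread(machine, tgid, tgid) == NULL) {
		struct thread *leader = thread__new(tgid, tgid);

		if (leader == NULL)
			return -1;

		/* Read /proc/<tgid>/maps and synthesize MMAP events for the leader. */
		if (perf_event__synthesize_mmap_events(tool, mmap_event, tgid, tgid,
						       process, machine, mmap_data)) {
			thread__put(leader);
			return -1;
		}

		thread__put(leader);
	}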

- Arnaldo


Thread overview: 4+ messages
2019-10-03  7:54 4.19 dwarf unwinding broken Josh Hunt
2019-10-03 10:03 ` Jiri Olsa
2019-10-03 15:17   ` Josh Hunt
2019-10-03 16:04     ` Arnaldo Carvalho de Melo [this message]
