Hi All,

This is great. Thank you very much for the information.

-Brian

From: Francis Giraldeau [mailto:francis.giraldeau@gmail.com]
Sent: Thursday, March 12, 2015 11:34 AM
To: Mathieu Desnoyers
Cc: Brian Robbins; lttng-dev@lists.lttng.org
Subject: Re: [lttng-dev] Userspace Tracing and Backtraces

2015-03-10 21:47 GMT-04:00 Mathieu Desnoyers:

> Francis: Did you define UNW_LOCAL_ONLY before including the libunwind
> header in your benchmarks? (see
> http://www.nongnu.org/libunwind/man/libunwind%283%29.html)
> This seems to change performance dramatically according to the
> documentation.

Yes, this is the case. The time to unwind is higher at the beginning
(probably related to building libunwind's internal cache), and it also
varies with the call-stack depth.

> Agreed on having the backtrace as a context. The main question left is to
> figure out if we want to call libunwind from within the traced application
> execution context. Unfortunately, libunwind is not reentrant wrt signals.
> This is already a good argument for not calling it from within a
> tracepoint. I wonder if the authors of libunwind would be open to making
> it signal-reentrant in the future (not by disabling signals, but rather by
> keeping a TLS nesting counter, and returning an error if nested, for
> performance considerations).

The functions unw_init_local() and unw_step() are signal safe [1]. The
critical sections are protected using lock_acquire(), which blocks all
signals before taking the mutex, and that prevents the recursion.

    #define lock_acquire(l,m)                              \
    do {                                                   \
      SIGPROCMASK (SIG_SETMASK, &unwi_full_mask, &(m));    \
      mutex_lock (l);                                      \
    } while (0)

    #define lock_release(l,m)                              \
    do {                                                   \
      mutex_unlock (l);                                    \
      SIGPROCMASK (SIG_SETMASK, &(m), NULL);               \
    } while (0)

To understand the implications, I wrote a small program to study nested
signals [2], where a signal is sent from within a signal handler, or where a
segmentation fault occurs in a signal handler. Blocking a signal defers it
until it is unblocked, while ignored signals are discarded. Blocked signals
that can't be ignored keep their default behaviour. This prevents a possible
deadlock, say if lock_acquire() were nested with a custom SIGSEGV handler
trying to take the same lock.

So, suppose that instead of blocking signals we used a per-thread mutex and
returned if try_lock() failed. It would be faster, but from the user's point
of view backtraces would be dropped seemingly at random. I would prefer it a
bit slower, but reliable. In addition, could it be that TLS is not signal
safe [3]?

> or using the perf capture mechanism that you describe below?

Perf peeks at userspace from kernel space; that is another story. I guess
libunwind was never ported to the kernel because it is a large chunk of
complicated code that performs a lot of I/O and computation, while copying a
portion of the stack is really about KISS and low runtime overhead.

> If using libunwind does not work out, another alternative I would consider
> would be to copy the stack like perf is doing from the kernel. However, in
> the spirit of compacting trace data, I would be tempted to do the following
> if we go down that route: check each pointer-aligned address for its
> content. If it looks like a pointer to an executable memory area (library,
> executable, or JIT'd code), we keep it. Else, we zero this information (not
> needed). We can then do an RLE-alike compression on the zeroes, so we can
> keep the layout of the stack after decompression.

Interesting! For comparison, here is a perf event [4] that shows there is a
lot of room for reducing the event size.
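To make the idea concrete, here is a rough sketch of such a filtering pass
(my own simplification, not code from perf or LTTng). The /proc/self/maps
parsing and the fixed-size range table are only for illustration; a real
implementation would have to refresh the table on dlopen() or JIT activity:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    /* Executable mappings parsed from /proc/self/maps. */
    struct exec_range { uintptr_t start, end; };
    static struct exec_range exec_ranges[256];
    static size_t nb_exec_ranges;

    /* Parse /proc/self/maps once and remember the executable ranges.
     * A real tracer would refresh this on dlopen()/JIT events. */
    static void load_exec_ranges(void)
    {
        unsigned long start, end;
        char perms[8];
        FILE *f = fopen("/proc/self/maps", "r");

        if (!f)
            return;
        while (nb_exec_ranges < 256 &&
               fscanf(f, "%lx-%lx %7s%*[^\n]", &start, &end, perms) == 3) {
            if (perms[2] == 'x') {
                exec_ranges[nb_exec_ranges].start = start;
                exec_ranges[nb_exec_ranges].end = end;
                nb_exec_ranges++;
            }
        }
        fclose(f);
    }

    static bool looks_like_code_pointer(uintptr_t word)
    {
        for (size_t i = 0; i < nb_exec_ranges; i++)
            if (word >= exec_ranges[i].start && word < exec_ranges[i].end)
                return true;
        return false;
    }

    /* Keep the words of the stack snapshot that point into executable
     * memory and zero the rest; the runs of zeroes can then be
     * RLE-compressed while the stack layout is preserved for offline
     * unwinding. */
    static void filter_stack_words(const uintptr_t *src, uintptr_t *dst,
                                   size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++)
            dst[i] = looks_like_code_pointer(src[i]) ? src[i] : 0;
    }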
We should check whether discarding the other saved register values on the
stack affects restoring the instruction pointer register. Doing the unwind
offline also solves the signal-safety problem, and it should be fast and
scalable.

Francis

[1] http://www.nongnu.org/libunwind/man/unw_init_local(3).html
[2] https://gist.github.com/giraldeau/98f08161e83a7ab800ea
[3] https://sourceware.org/glibc/wiki/TLSandSignals
[4] http://pastebin.com/sByfXXAQ
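P.S. For completeness, the in-process capture path discussed above
(unw_init_local() + unw_step() with UNW_LOCAL_ONLY) boils down to roughly
the following sketch. It is only an illustration: the buffer handling and
error paths are placeholders, not code from lttng-ust.

    #define UNW_LOCAL_ONLY
    #include <libunwind.h>
    #include <stddef.h>

    /* Walk the current call stack and record up to 'max' return
     * addresses. Returns the number of frames captured. */
    static size_t capture_backtrace(unw_word_t *addrs, size_t max)
    {
        unw_context_t ctx;
        unw_cursor_t cursor;
        size_t n = 0;

        if (unw_getcontext(&ctx) != 0 || unw_init_local(&cursor, &ctx) != 0)
            return 0;
        while (n < max && unw_step(&cursor) > 0) {
            unw_word_t ip;

            if (unw_get_reg(&cursor, UNW_REG_IP, &ip) != 0)
                break;
            addrs[n++] = ip;
        }
        return n;
    }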