From: Alexey Budankov <alexey.budankov@linux.intel.com>
To: Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@redhat.com>, Namhyung Kim <namhyung@kernel.org>,
Andi Kleen <ak@linux.intel.com>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: [PATCH v7 0/2]: perf: reduce data loss when profiling highly parallel CPU bound workloads
Date: Wed, 5 Sep 2018 10:16:42 +0300 [thread overview]
Message-ID: <1fc1fc5b-a8cc-2b05-d43c-692e58855c81@linux.intel.com> (raw)
Currently, in record mode the tool writes the trace serially.
The algorithm loops over the mapped per-cpu data buffers and stores
ready data chunks into a trace file using the write() system call.
Under some circumstances the kernel may run out of free space in a
buffer, because the buffer's other half has not yet been written to
disk while the tool is busy writing another buffer's data.
Thus the serial trace writing implementation may cause the kernel
to lose profiling data, and that is what is observed when profiling
highly parallel CPU bound workloads on machines with a large number
of cores.
An experiment profiling matrix multiplication code executing 128
threads on an Intel Xeon Phi (KNM) with 272 cores, as below,
demonstrates a data loss metric value of 98%:
/usr/bin/time perf record -o /tmp/perf-ser.data -a -N -B -T -R -g \
--call-graph dwarf,1024 --user-regs=IP,SP,BP \
--switch-events -e cycles,instructions,ref-cycles,software/period=1,name=cs,config=0x3/Duk -- \
matrix.gcc
The data loss metric is the ratio lost_time/elapsed_time, where
lost_time is the sum of the time intervals covered by PERF_RECORD_LOST
records and elapsed_time is the elapsed application run time
under profiling.
Applying asynchronous trace streaming through the POSIX AIO API
(http://man7.org/linux/man-pages/man7/aio.7.html)
lowers the data loss metric value, reducing the 98% loss
to almost 0%.
---
Alexey Budankov (2):
perf util: map data buffer for preserving collected data
perf record: enable asynchronous trace writing
tools/perf/builtin-record.c | 197 +++++++++++++++++++++++++++++++++++++++++++-
tools/perf/perf.h | 1 +
tools/perf/util/evlist.c | 7 +-
tools/perf/util/evlist.h | 3 +-
tools/perf/util/mmap.c | 110 +++++++++++++++++++++----
tools/perf/util/mmap.h | 10 ++-
6 files changed, 302 insertions(+), 26 deletions(-)
---
Changes in v7:
- implemented handling of the record.aio setting from the perfconfig file
Changes in v6:
- adjusted setting of priorities for cblocks;
- handled the errno == EAGAIN case returned by aio_write();
Changes in v5:
- resolved livelock on perf record -e intel_pt// -- dd if=/dev/zero of=/dev/null count=100000
- data loss metrics decreased from 25% to 2x in trialed configuration;
- reshaped layout of data structures;
- implemented --aio option;
- avoided nanosleep() prior to calling aio_suspend();
- switched to per-cpu aio multi buffer record__aio_sync();
- record_mmap_read_sync() now does a global sync just before
switching trace files or stopping collection;
Changes in v4:
- converted mmap()/munmap() to malloc()/free() for mmap->data buffer management
- converted void *bf to struct perf_mmap *md in signatures
- added a comment in perf_mmap__push() just before perf_mmap__get();
- added a comment in record__mmap_read_sync() on possibly restarting
the aio_write() operation and releasing the perf_mmap object afterwards;
- added perf_mmap__put() for the cases of failed aio_write();
Changes in v3:
- added comments about the nanosleep(0.5ms) call prior to aio_suspend()
to cope with the intrusiveness of its implementation in glibc;
- added comments about the rationale behind copying profiling data
into the mmap->data buffer;
Changes in v2:
- converted zalloc() to calloc() for allocation of the mmap_aio array;
- fixed a typo and adjusted the fallback branch code;
Thread overview: 23+ messages
2018-09-05 7:16 Alexey Budankov [this message]
2018-09-05 7:19 ` [PATCH v7 1/2]: perf util: map data buffer for preserving collected data Alexey Budankov
2018-09-06 11:04 ` Jiri Olsa
2018-09-06 11:50 ` Alexey Budankov
2018-09-06 11:04 ` Jiri Olsa
2018-09-06 11:54 ` Alexey Budankov
2018-09-05 7:39 ` [PATCH v7 2/2]: perf record: enable asynchronous trace writing Alexey Budankov
2018-09-06 11:04 ` Jiri Olsa
2018-09-06 11:57 ` Alexey Budankov
2018-09-06 11:04 ` Jiri Olsa
2018-09-06 11:58 ` Alexey Budankov
2018-09-06 11:04 ` Jiri Olsa
2018-09-06 11:59 ` Alexey Budankov
2018-09-06 11:04 ` Jiri Olsa
2018-09-06 12:09 ` Alexey Budankov
2018-09-05 11:28 ` [PATCH v7 0/2]: perf: reduce data loss when profiling highly parallel CPU bound workloads Jiri Olsa
2018-09-05 17:37 ` Alexey Budankov
2018-09-05 18:51 ` Arnaldo Carvalho de Melo
2018-09-06 6:03 ` Alexey Budankov
2018-09-06 8:14 ` Jiri Olsa
2018-09-06 8:20 ` Alexey Budankov
2018-09-06 6:59 ` Alexey Budankov
2018-09-06 6:57 ` Alexey Budankov