* [PATCH v3 0/2] perf kvm stat live: Copy events @ 2014-10-02 16:38 Alexander Yarygin 2014-10-02 16:38 ` [PATCH 1/2] perf tools: Add option to copy events when queueing Alexander Yarygin 2014-10-02 16:38 ` [PATCH 2/2] perf kvm stat live: Enable events copying Alexander Yarygin 0 siblings, 2 replies; 14+ messages in thread From: Alexander Yarygin @ 2014-10-02 16:38 UTC (permalink / raw) To: linux-kernel Cc: Alexander Yarygin, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian Hello, This is a fix for a 'perf kvm stat live' crash when it tries to parse events that have already been overwritten by the kernel. Patches - 1/2 adds an option to copy events when they are pushed to the samples queue. The patch is based on the patch by David Ahern (https://lkml.org/lkml/2013/9/6/388) - 2/2 enables the copying for perf kvm stat live. Changes in v3: - move repetitive code into functions Changes in v2: - the option to copy events is now a part of ordered_events - use memdup() instead of malloc()/memcpy() - event allocations are kept under the report.queue-size limit Alexander Yarygin (2): perf tools: Add option to copy events when queueing perf kvm stat live: Enable events copying tools/perf/builtin-kvm.c | 1 + tools/perf/util/ordered-events.c | 49 ++++++++++++++++++++++++++++++++++++---- tools/perf/util/ordered-events.h | 10 +++++++- tools/perf/util/session.c | 5 ++-- 4 files changed, 57 insertions(+), 8 deletions(-) -- 1.9.1 ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-02 16:38 [PATCH v3 0/2] perf kvm stat live: Copy events Alexander Yarygin @ 2014-10-02 16:38 ` Alexander Yarygin 2014-10-03 4:34 ` Ingo Molnar 2014-10-03 7:33 ` Jiri Olsa 2014-10-02 16:38 ` [PATCH 2/2] perf kvm stat live: Enable events copying Alexander Yarygin 1 sibling, 2 replies; 14+ messages in thread From: Alexander Yarygin @ 2014-10-02 16:38 UTC (permalink / raw) To: linux-kernel Cc: Alexander Yarygin, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian When processing events the session code has an ordered samples queue which is used to time-sort events coming in across multiple mmaps. At a later point in time samples on the queue are flushed up to some timestamp at which point the event is actually processed. When analyzing events live (ie., record/analysis path in the same command) there is a race that leads to corrupted events and parse errors which cause perf to terminate. The problem is that when the event is placed in the ordered samples queue it is only a reference to the event which is really sitting in the mmap buffer. Even though the event is queued for later processing the mmap tail pointer is updated which indicates to the kernel that the event has been processed. The race is flushing the event from the queue before it gets overwritten by some other event. For commands trying to process events live (versus just writing to a file) and processing a high rate of events this leads to parse failures and perf terminates. Examples hitting this problem are 'perf kvm stat live', especially with nested VMs which generate 100,000+ traces per second, and a command processing scheduling events with a high rate of context switching -- e.g., running 'perf bench sched pipe'. 
This patch offers live commands an option to copy the event when it is placed in the ordered samples queue. Based on a patch from David Ahern <dsahern@gmail.com> Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung.kim@lge.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> --- tools/perf/util/ordered-events.c | 51 ++++++++++++++++++++++++++++++++++++---- tools/perf/util/ordered-events.h | 10 +++++++- tools/perf/util/session.c | 5 ++-- 3 files changed, 58 insertions(+), 8 deletions(-) diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c index 706ce1a..06d53ee 100644 --- a/tools/perf/util/ordered-events.c +++ b/tools/perf/util/ordered-events.c @@ -1,5 +1,6 @@ #include <linux/list.h> #include <linux/compiler.h> +#include <linux/string.h> #include "ordered-events.h" #include "evlist.h" #include "session.h" @@ -57,11 +58,45 @@ static void queue_event(struct ordered_events *oe, struct ordered_event *new) } } +static union perf_event *__dup_event(struct ordered_events *oe, + union perf_event *event) +{ + union perf_event *new_event = NULL; + + if (oe->cur_alloc_size < oe->max_alloc_size) { + new_event = memdup(event, event->header.size); + if (new_event) + oe->cur_alloc_size += event->header.size; + } + + return new_event; +} + +static union perf_event *dup_event(struct ordered_events *oe, + union perf_event *event) +{ + return oe->copy_on_queue ? 
__dup_event(oe, event) : event; +} + +static void free_dup_event(struct ordered_events *oe, union perf_event *event) +{ + if (oe->copy_on_queue) { + oe->cur_alloc_size -= event->header.size; + free(event); + } +} + #define MAX_SAMPLE_BUFFER (64 * 1024 / sizeof(struct ordered_event)) -static struct ordered_event *alloc_event(struct ordered_events *oe) +static struct ordered_event *alloc_event(struct ordered_events *oe, + union perf_event *event) { struct list_head *cache = &oe->cache; struct ordered_event *new = NULL; + union perf_event *new_event; + + new_event = dup_event(oe, event); + if (!new_event) + return NULL; if (!list_empty(cache)) { new = list_entry(cache->next, struct ordered_event, list); @@ -74,8 +109,10 @@ static struct ordered_event *alloc_event(struct ordered_events *oe) size_t size = MAX_SAMPLE_BUFFER * sizeof(*new); oe->buffer = malloc(size); - if (!oe->buffer) + if (!oe->buffer) { + free_dup_event(oe, new_event); return NULL; + } pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n", oe->cur_alloc_size, size, oe->max_alloc_size); @@ -90,15 +127,19 @@ static struct ordered_event *alloc_event(struct ordered_events *oe) pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size); } + new->event = new_event; + return new; } struct ordered_event * -ordered_events__new(struct ordered_events *oe, u64 timestamp) +ordered_events__new(struct ordered_events *oe, u64 timestamp, + union perf_event *event) { struct ordered_event *new; - new = alloc_event(oe); + new = alloc_event(oe, event); + if (new) { new->timestamp = timestamp; queue_event(oe, new); @@ -111,6 +152,7 @@ void ordered_events__delete(struct ordered_events *oe, struct ordered_event *eve { list_move(&event->list, &oe->cache); oe->nr_events--; + free_dup_event(oe, event->event); } static int __ordered_events__flush(struct perf_session *s, @@ -240,6 +282,7 @@ void ordered_events__free(struct ordered_events *oe) event = list_entry(oe->to_free.next, struct ordered_event, list); 
list_del(&event->list); + free_dup_event(oe, event->event); free(event); } } diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h index 3b2f205..7b8f9b0 100644 --- a/tools/perf/util/ordered-events.h +++ b/tools/perf/util/ordered-events.h @@ -34,9 +34,11 @@ struct ordered_events { int buffer_idx; unsigned int nr_events; enum oe_flush last_flush_type; + bool copy_on_queue; }; -struct ordered_event *ordered_events__new(struct ordered_events *oe, u64 timestamp); +struct ordered_event *ordered_events__new(struct ordered_events *oe, u64 timestamp, + union perf_event *event); void ordered_events__delete(struct ordered_events *oe, struct ordered_event *event); int ordered_events__flush(struct perf_session *s, struct perf_tool *tool, enum oe_flush how); @@ -48,4 +50,10 @@ void ordered_events__set_alloc_size(struct ordered_events *oe, u64 size) { oe->max_alloc_size = size; } + +static inline +void ordered_events__set_copy_on_queue(struct ordered_events *oe, bool copy) +{ + oe->copy_on_queue = copy; +} #endif /* __ORDERED_EVENTS_H */ diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index 6d2d50d..976064d 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -532,17 +532,16 @@ int perf_session_queue_event(struct perf_session *s, union perf_event *event, return -EINVAL; } - new = ordered_events__new(oe, timestamp); + new = ordered_events__new(oe, timestamp, event); if (!new) { ordered_events__flush(s, tool, OE_FLUSH__HALF); - new = ordered_events__new(oe, timestamp); + new = ordered_events__new(oe, timestamp, event); } if (!new) return -ENOMEM; new->file_offset = file_offset; - new->event = event; return 0; } -- 1.9.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-02 16:38 ` [PATCH 1/2] perf tools: Add option to copy events when queueing Alexander Yarygin @ 2014-10-03 4:34 ` Ingo Molnar 2014-10-03 6:50 ` Jiri Olsa 2014-10-03 7:33 ` Jiri Olsa 1 sibling, 1 reply; 14+ messages in thread From: Ingo Molnar @ 2014-10-03 4:34 UTC (permalink / raw) To: Alexander Yarygin Cc: linux-kernel, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian * Alexander Yarygin <yarygin@linux.vnet.ibm.com> wrote: > When processing events the session code has an ordered samples > queue which is used to time-sort events coming in across > multiple mmaps. At a later point in time samples on the queue > are flushed up to some timestamp at which point the event is > actually processed. > > When analyzing events live (ie., record/analysis path in the > same command) there is a race that leads to corrupted events > and parse errors which cause perf to terminate. The problem is > that when the event is placed in the ordered samples queue it > is only a reference to the event which is really sitting in the > mmap buffer. Even though the event is queued for later > processing the mmap tail pointer is updated which indicates to > the kernel that the event has been processed. The race is > flushing the event from the queue before it gets overwritten by > some other event. For commands trying to process events live > (versus just writing to a file) and processing a high rate of > events this leads to parse failures and perf terminates. > > Examples hitting this problem are 'perf kvm stat live', > especially with nested VMs which generate 100,000+ traces per > second, and a command processing scheduling events with a high > rate of context switching -- e.g., running 'perf bench sched > pipe'. 
> > This patch offers live commands an option to copy the event > when it is placed in the ordered samples queue. What's the performance effect of this - i.e. by how much does CPU use increase due to copying the events? Wouldn't it be faster to fix this problem by updating the mmap tail pointer only once the event has truly been consumed? Thanks, Ingo ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-03 4:34 ` Ingo Molnar @ 2014-10-03 6:50 ` Jiri Olsa 2014-10-03 8:47 ` Ingo Molnar 0 siblings, 1 reply; 14+ messages in thread From: Jiri Olsa @ 2014-10-03 6:50 UTC (permalink / raw) To: Ingo Molnar Cc: Alexander Yarygin, linux-kernel, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian On Fri, Oct 03, 2014 at 06:34:21AM +0200, Ingo Molnar wrote: > > * Alexander Yarygin <yarygin@linux.vnet.ibm.com> wrote: > > > When processing events the session code has an ordered samples > > queue which is used to time-sort events coming in across > > multiple mmaps. At a later point in time samples on the queue > > are flushed up to some timestamp at which point the event is > > actually processed. > > > > When analyzing events live (ie., record/analysis path in the > > same command) there is a race that leads to corrupted events > > and parse errors which cause perf to terminate. The problem is > > that when the event is placed in the ordered samples queue it > > is only a reference to the event which is really sitting in the > > mmap buffer. Even though the event is queued for later > > processing the mmap tail pointer is updated which indicates to > > the kernel that the event has been processed. The race is > > flushing the event from the queue before it gets overwritten by > > some other event. For commands trying to process events live > > (versus just writing to a file) and processing a high rate of > > events this leads to parse failures and perf terminates. > > > > Examples hitting this problem are 'perf kvm stat live', > > especially with nested VMs which generate 100,000+ traces per > > second, and a command processing scheduling events with a high > > rate of context switching -- e.g., running 'perf bench sched > > pipe'. 
> > > > This patch offers live commands an option to copy the event > > when it is placed in the ordered samples queue. > > What's the performance effect of this - i.e. by how much does CPU > use increase due to copying the events? > > Wouldn't it be faster to fix this problem by updating the mmap > tail pointer only once the event has truly been consumed? Alexander mentioned he'd lose data, because of userspace processing being too slow: http://marc.info/?l=linux-kernel&m=141111652424818&w=2 jirka ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-03 6:50 ` Jiri Olsa @ 2014-10-03 8:47 ` Ingo Molnar 2014-10-03 14:25 ` Alexander Yarygin 0 siblings, 1 reply; 14+ messages in thread From: Ingo Molnar @ 2014-10-03 8:47 UTC (permalink / raw) To: Jiri Olsa Cc: Alexander Yarygin, linux-kernel, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian * Jiri Olsa <jolsa@redhat.com> wrote: > On Fri, Oct 03, 2014 at 06:34:21AM +0200, Ingo Molnar wrote: > > > > * Alexander Yarygin <yarygin@linux.vnet.ibm.com> wrote: > > > > > When processing events the session code has an ordered samples > > > queue which is used to time-sort events coming in across > > > multiple mmaps. At a later point in time samples on the queue > > > are flushed up to some timestamp at which point the event is > > > actually processed. > > > > > > When analyzing events live (ie., record/analysis path in the > > > same command) there is a race that leads to corrupted events > > > and parse errors which cause perf to terminate. The problem is > > > that when the event is placed in the ordered samples queue it > > > is only a reference to the event which is really sitting in the > > > mmap buffer. Even though the event is queued for later > > > processing the mmap tail pointer is updated which indicates to > > > the kernel that the event has been processed. The race is > > > flushing the event from the queue before it gets overwritten by > > > some other event. For commands trying to process events live > > > (versus just writing to a file) and processing a high rate of > > > events this leads to parse failures and perf terminates. 
> > > > > > Examples hitting this problem are 'perf kvm stat live', > > > especially with nested VMs which generate 100,000+ traces per > > > second, and a command processing scheduling events with a high > > > rate of context switching -- e.g., running 'perf bench sched > > > pipe'. > > > > > > This patch offers live commands an option to copy the event > > > when it is placed in the ordered samples queue. > > > > What's the performance effect of this - i.e. by how much does CPU > > use increase due to copying the events? > > > > Wouldn't it be faster to fix this problem by updating the mmap > > tail pointer only once the event has truly been consumed? > > Alexander mentioned he'd loose data, because of userspace > processing being to slow: > > http://marc.info/?l=linux-kernel&m=141111652424818&w=2 So copying helps by allocating an essentially larger buffer, to hold all unprocessed events that user-space is too slow to process? I guess it's a valid usecase. Thanks, Ingo ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-03 8:47 ` Ingo Molnar @ 2014-10-03 14:25 ` Alexander Yarygin 0 siblings, 0 replies; 14+ messages in thread From: Alexander Yarygin @ 2014-10-03 14:25 UTC (permalink / raw) To: Ingo Molnar Cc: Jiri Olsa, linux-kernel, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian, Alexander Yarygin Ingo Molnar <mingo@kernel.org> writes: > * Jiri Olsa <jolsa@redhat.com> wrote: > >> On Fri, Oct 03, 2014 at 06:34:21AM +0200, Ingo Molnar wrote: >> > >> > * Alexander Yarygin <yarygin@linux.vnet.ibm.com> wrote: >> > [..] >> > >> > What's the performance effect of this - i.e. by how much does CPU >> > use increase due to copying the events? >> > >> > Wouldn't it be faster to fix this problem by updating the mmap >> > tail pointer only once the event has truly been consumed? >> >> Alexander mentioned he'd lose data, because of userspace >> processing being too slow: >> >> http://marc.info/?l=linux-kernel&m=141111652424818&w=2 > So copying helps by allocating an essentially larger buffer, to > hold all unprocessed events that user-space is too slow to > process? > > I guess it's a valid usecase. > > Thanks, > > Ingo Right. Also, it looks like the overhead here isn't a big deal: the time needed for actually processing an event is significantly bigger, and the additional memdup() doesn't change that much. ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-02 16:38 ` [PATCH 1/2] perf tools: Add option to copy events when queueing Alexander Yarygin 2014-10-03 4:34 ` Ingo Molnar @ 2014-10-03 7:33 ` Jiri Olsa 1 sibling, 0 replies; 14+ messages in thread From: Jiri Olsa @ 2014-10-03 7:33 UTC (permalink / raw) To: Alexander Yarygin Cc: linux-kernel, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian On Thu, Oct 02, 2014 at 08:38:55PM +0400, Alexander Yarygin wrote: > When processing events the session code has an ordered samples queue which is > used to time-sort events coming in across multiple mmaps. At a later point in > time samples on the queue are flushed up to some timestamp at which point the > event is actually processed. > > When analyzing events live (ie., record/analysis path in the same command) > there is a race that leads to corrupted events and parse errors which cause > perf to terminate. The problem is that when the event is placed in the ordered > samples queue it is only a reference to the event which is really sitting in > the mmap buffer. Even though the event is queued for later processing the mmap > tail pointer is updated which indicates to the kernel that the event has been > processed. The race is flushing the event from the queue before it gets > overwritten by some other event. For commands trying to process events live > (versus just writing to a file) and processing a high rate of events this leads > to parse failures and perf terminates. > > Examples hitting this problem are 'perf kvm stat live', especially with nested > VMs which generate 100,000+ traces per second, and a command processing > scheduling events with a high rate of context switching -- e.g., running > 'perf bench sched pipe'. 
> > This patch offers live commands an option to copy the event when it is placed in > the ordered samples queue. > > Based on a patch from David Ahern <dsahern@gmail.com> > > Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com> > Cc: Arnaldo Carvalho de Melo <acme@kernel.org> > Cc: Christian Borntraeger <borntraeger@de.ibm.com> > Cc: Frederic Weisbecker <fweisbec@gmail.com> > Cc: Ingo Molnar <mingo@kernel.org> > Cc: Jiri Olsa <jolsa@redhat.com> > Cc: Mike Galbraith <efault@gmx.de> > Cc: Namhyung Kim <namhyung.kim@lge.com> > Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> > Cc: Stephane Eranian <eranian@google.com> > --- > tools/perf/util/ordered-events.c | 51 ++++++++++++++++++++++++++++++++++++---- > tools/perf/util/ordered-events.h | 10 +++++++- > tools/perf/util/session.c | 5 ++-- > 3 files changed, 58 insertions(+), 8 deletions(-) apart from extra whitespaces (below): Acked-by: Jiri Olsa <jolsa@kernel.org> thanks, jirka > > diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c > index 706ce1a..06d53ee 100644 > --- a/tools/perf/util/ordered-events.c > +++ b/tools/perf/util/ordered-events.c SNIP > + } > > pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n", > oe->cur_alloc_size, size, oe->max_alloc_size); > @@ -90,15 +127,19 @@ static struct ordered_event *alloc_event(struct ordered_events *oe) > pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size); > } > > + new->event = new_event; > + > return new; ^^^ here > } > > struct ordered_event * > -ordered_events__new(struct ordered_events *oe, u64 timestamp) > +ordered_events__new(struct ordered_events *oe, u64 timestamp, > + union perf_event *event) > { > struct ordered_event *new; > > - new = alloc_event(oe); > + new = alloc_event(oe, event); > + > if (new) { ^^^ and here ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 2/2] perf kvm stat live: Enable events copying 2014-10-02 16:38 [PATCH v3 0/2] perf kvm stat live: Copy events Alexander Yarygin 2014-10-02 16:38 ` [PATCH 1/2] perf tools: Add option to copy events when queueing Alexander Yarygin @ 2014-10-02 16:38 ` Alexander Yarygin 1 sibling, 0 replies; 14+ messages in thread From: Alexander Yarygin @ 2014-10-02 16:38 UTC (permalink / raw) To: linux-kernel Cc: Alexander Yarygin, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian Events are analyzed by two functions: mmap_read() and finished_round(). During mmap_read(), perf receives events from shared memory, queues pointers to them for further processing in finished_round() and notifies the kernel that the events have been processed. By the time finished_round() is invoked, the queued events can already have been overwritten by the kernel, so finished_round() operates on potentially corrupted memory. Since there is no point where the event can be safely consumed, let's copy events when queueing. 
Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: David Ahern <dsahern@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> --- tools/perf/builtin-kvm.c | 1 + 1 file changed, 1 insertion(+) diff --git a/tools/perf/builtin-kvm.c b/tools/perf/builtin-kvm.c index f5d3ae4..80efbfa 100644 --- a/tools/perf/builtin-kvm.c +++ b/tools/perf/builtin-kvm.c @@ -1370,6 +1370,7 @@ static int kvm_events_live(struct perf_kvm_stat *kvm, } kvm->session->evlist = kvm->evlist; perf_session__set_id_hdr_size(kvm->session); + ordered_events__set_copy_on_queue(&kvm->session->ordered_events, true); machine__synthesize_threads(&kvm->session->machines.host, &kvm->opts.target, kvm->evlist->threads, false); err = kvm_live_open_events(kvm); -- 1.9.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v4 0/2] perf kvm stat live: Copy events @ 2014-10-03 14:40 Alexander Yarygin 2014-10-03 14:40 ` [PATCH 1/2] perf tools: Add option to copy events when queueing Alexander Yarygin 0 siblings, 1 reply; 14+ messages in thread From: Alexander Yarygin @ 2014-10-03 14:40 UTC (permalink / raw) To: linux-kernel Cc: Alexander Yarygin, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian Hello, This is a fix for a 'perf kvm stat live' crash when it tries to parse events that have already been overwritten by the kernel. Patches - 1/2 adds an option to copy events when they are pushed to the samples queue. The patch is based on the patch by David Ahern (https://lkml.org/lkml/2013/9/6/388) - 2/2 enables the copying for perf kvm stat live. Changes in v4: - removed extra whitespaces :) Changes in v3: - move repetitive code into functions Changes in v2: - the option to copy events is now a part of ordered_events - use memdup() instead of malloc()/memcpy() - event allocations are kept under the report.queue-size limit Alexander Yarygin (2): perf tools: Add option to copy events when queueing perf kvm stat live: Enable events copying tools/perf/builtin-kvm.c | 1 + tools/perf/util/ordered-events.c | 49 ++++++++++++++++++++++++++++++++++++---- tools/perf/util/ordered-events.h | 10 +++++++- tools/perf/util/session.c | 5 ++-- 4 files changed, 57 insertions(+), 8 deletions(-) -- 1.9.1 ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-03 14:40 [PATCH v4 0/2] perf kvm stat live: Copy events Alexander Yarygin @ 2014-10-03 14:40 ` Alexander Yarygin 0 siblings, 0 replies; 14+ messages in thread From: Alexander Yarygin @ 2014-10-03 14:40 UTC (permalink / raw) To: linux-kernel Cc: Alexander Yarygin, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian When processing events the session code has an ordered samples queue which is used to time-sort events coming in across multiple mmaps. At a later point in time samples on the queue are flushed up to some timestamp at which point the event is actually processed. When analyzing events live (ie., record/analysis path in the same command) there is a race that leads to corrupted events and parse errors which cause perf to terminate. The problem is that when the event is placed in the ordered samples queue it is only a reference to the event which is really sitting in the mmap buffer. Even though the event is queued for later processing the mmap tail pointer is updated which indicates to the kernel that the event has been processed. The race is flushing the event from the queue before it gets overwritten by some other event. For commands trying to process events live (versus just writing to a file) and processing a high rate of events this leads to parse failures and perf terminates. Examples hitting this problem are 'perf kvm stat live', especially with nested VMs which generate 100,000+ traces per second, and a command processing scheduling events with a high rate of context switching -- e.g., running 'perf bench sched pipe'. This patch offers live commands an option to copy the event when it is placed in the ordered samples queue. 
Based on a patch from David Ahern <dsahern@gmail.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung.kim@lge.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> --- tools/perf/util/ordered-events.c | 49 ++++++++++++++++++++++++++++++++++++---- tools/perf/util/ordered-events.h | 10 +++++++- tools/perf/util/session.c | 5 ++-- 3 files changed, 56 insertions(+), 8 deletions(-) diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c index 706ce1a..fd4be94 100644 --- a/tools/perf/util/ordered-events.c +++ b/tools/perf/util/ordered-events.c @@ -1,5 +1,6 @@ #include <linux/list.h> #include <linux/compiler.h> +#include <linux/string.h> #include "ordered-events.h" #include "evlist.h" #include "session.h" @@ -57,11 +58,45 @@ static void queue_event(struct ordered_events *oe, struct ordered_event *new) } } +static union perf_event *__dup_event(struct ordered_events *oe, + union perf_event *event) +{ + union perf_event *new_event = NULL; + + if (oe->cur_alloc_size < oe->max_alloc_size) { + new_event = memdup(event, event->header.size); + if (new_event) + oe->cur_alloc_size += event->header.size; + } + + return new_event; +} + +static union perf_event *dup_event(struct ordered_events *oe, + union perf_event *event) +{ + return oe->copy_on_queue ? 
__dup_event(oe, event) : event; +} + +static void free_dup_event(struct ordered_events *oe, union perf_event *event) +{ + if (oe->copy_on_queue) { + oe->cur_alloc_size -= event->header.size; + free(event); + } +} + #define MAX_SAMPLE_BUFFER (64 * 1024 / sizeof(struct ordered_event)) -static struct ordered_event *alloc_event(struct ordered_events *oe) +static struct ordered_event *alloc_event(struct ordered_events *oe, + union perf_event *event) { struct list_head *cache = &oe->cache; struct ordered_event *new = NULL; + union perf_event *new_event; + + new_event = dup_event(oe, event); + if (!new_event) + return NULL; if (!list_empty(cache)) { new = list_entry(cache->next, struct ordered_event, list); @@ -74,8 +109,10 @@ static struct ordered_event *alloc_event(struct ordered_events *oe) size_t size = MAX_SAMPLE_BUFFER * sizeof(*new); oe->buffer = malloc(size); - if (!oe->buffer) + if (!oe->buffer) { + free_dup_event(oe, new_event); return NULL; + } pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n", oe->cur_alloc_size, size, oe->max_alloc_size); @@ -90,15 +127,17 @@ static struct ordered_event *alloc_event(struct ordered_events *oe) pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size); } + new->event = new_event; return new; } struct ordered_event * -ordered_events__new(struct ordered_events *oe, u64 timestamp) +ordered_events__new(struct ordered_events *oe, u64 timestamp, + union perf_event *event) { struct ordered_event *new; - new = alloc_event(oe); + new = alloc_event(oe, event); if (new) { new->timestamp = timestamp; queue_event(oe, new); @@ -111,6 +150,7 @@ void ordered_events__delete(struct ordered_events *oe, struct ordered_event *eve { list_move(&event->list, &oe->cache); oe->nr_events--; + free_dup_event(oe, event->event); } static int __ordered_events__flush(struct perf_session *s, @@ -240,6 +280,7 @@ void ordered_events__free(struct ordered_events *oe) event = list_entry(oe->to_free.next, struct ordered_event, list); 
list_del(&event->list); + free_dup_event(oe, event->event); free(event); } } diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h index 3b2f205..7b8f9b0 100644 --- a/tools/perf/util/ordered-events.h +++ b/tools/perf/util/ordered-events.h @@ -34,9 +34,11 @@ struct ordered_events { int buffer_idx; unsigned int nr_events; enum oe_flush last_flush_type; + bool copy_on_queue; }; -struct ordered_event *ordered_events__new(struct ordered_events *oe, u64 timestamp); +struct ordered_event *ordered_events__new(struct ordered_events *oe, u64 timestamp, + union perf_event *event); void ordered_events__delete(struct ordered_events *oe, struct ordered_event *event); int ordered_events__flush(struct perf_session *s, struct perf_tool *tool, enum oe_flush how); @@ -48,4 +50,10 @@ void ordered_events__set_alloc_size(struct ordered_events *oe, u64 size) { oe->max_alloc_size = size; } + +static inline +void ordered_events__set_copy_on_queue(struct ordered_events *oe, bool copy) +{ + oe->copy_on_queue = copy; +} #endif /* __ORDERED_EVENTS_H */ diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index 6d2d50d..976064d 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -532,17 +532,16 @@ int perf_session_queue_event(struct perf_session *s, union perf_event *event, return -EINVAL; } - new = ordered_events__new(oe, timestamp); + new = ordered_events__new(oe, timestamp, event); if (!new) { ordered_events__flush(s, tool, OE_FLUSH__HALF); - new = ordered_events__new(oe, timestamp); + new = ordered_events__new(oe, timestamp, event); } if (!new) return -ENOMEM; new->file_offset = file_offset; - new->event = event; return 0; } -- 1.9.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v2 0/2] perf kvm stat live: Copy events
@ 2014-10-02 10:32 Alexander Yarygin
  2014-10-02 10:32 ` [PATCH 1/2] perf tools: Add option to copy events when queueing Alexander Yarygin
  0 siblings, 1 reply; 14+ messages in thread
From: Alexander Yarygin @ 2014-10-02 10:32 UTC (permalink / raw)
To: linux-kernel
Cc: Alexander Yarygin, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian

Hello,

This is a fix for a 'perf kvm stat live' crash that occurs when it tries to parse events that have already been overwritten by the kernel.

Patches:
- 1/2 adds an option to copy events when they are pushed to the samples queue. The patch is based on the patch by David Ahern (https://lkml.org/lkml/2013/9/6/388)
- 2/2 enables the copying for 'perf kvm stat live'.

Changes in v2:
- the option to copy events is now a part of ordered_events
- use memdup() instead of malloc()/memcpy()
- event allocations are kept under the report.queue-size limit

Previous thread: https://lkml.org/lkml/2014/9/18/353

Thanks!

Alexander Yarygin (2):
  perf tools: Add option to copy events when queueing
  perf kvm stat live: Enable events copying

 tools/perf/builtin-kvm.c         |  1 +
 tools/perf/util/ordered-events.c | 41 ++++++++++++++++++++++++++++++++++++----
 tools/perf/util/ordered-events.h | 10 +++++++++-
 tools/perf/util/session.c        |  5 ++---
 4 files changed, 49 insertions(+), 8 deletions(-)

--
1.9.1

^ permalink raw reply	[flat|nested] 14+ messages in thread
* [PATCH 1/2] perf tools: Add option to copy events when queueing
  2014-10-02 10:32 [PATCH v2 0/2] perf kvm stat live: Copy events Alexander Yarygin
@ 2014-10-02 10:32 ` Alexander Yarygin
  2014-10-02 14:15   ` Jiri Olsa
  2014-10-02 15:45   ` David Ahern
  0 siblings, 2 replies; 14+ messages in thread
From: Alexander Yarygin @ 2014-10-02 10:32 UTC (permalink / raw)
To: linux-kernel
Cc: Alexander Yarygin, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian

From: David Ahern <dsahern@gmail.com>

When processing events, the session code keeps an ordered samples queue which is used to time-sort events coming in across multiple mmaps. At a later point in time, samples on the queue are flushed up to some timestamp, at which point the event is actually processed.

When analyzing events live (i.e., record/analysis path in the same command) there is a race that leads to corrupted events and parse errors, which cause perf to terminate. The problem is that when an event is placed in the ordered samples queue, only a reference to the event is stored; the event itself still sits in the mmap buffer. Even though the event is queued for later processing, the mmap tail pointer is updated, which indicates to the kernel that the event has been processed. The race is whether the event is flushed from the queue before it gets overwritten by some other event. For commands trying to process events live (versus just writing to a file) while handling a high rate of events, this leads to parse failures, and perf terminates.

Examples hitting this problem are 'perf kvm stat live', especially with nested VMs, which generate 100,000+ traces per second, and any command processing scheduling events with a high rate of context switching -- e.g., while running 'perf bench sched pipe'.

This patch offers live commands an option to copy the event when it is placed in the ordered samples queue.
Signed-off-by: David Ahern <dsahern@gmail.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mike Galbraith <efault@gmx.de> Cc: Namhyung Kim <namhyung.kim@lge.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Stephane Eranian <eranian@google.com> Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com> --- tools/perf/util/ordered-events.c | 41 ++++++++++++++++++++++++++++++++++++---- tools/perf/util/ordered-events.h | 10 +++++++++- tools/perf/util/session.c | 5 ++--- 3 files changed, 48 insertions(+), 8 deletions(-) diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c index 706ce1a..f7383cc 100644 --- a/tools/perf/util/ordered-events.c +++ b/tools/perf/util/ordered-events.c @@ -1,5 +1,6 @@ #include <linux/list.h> #include <linux/compiler.h> +#include <linux/string.h> #include "ordered-events.h" #include "evlist.h" #include "session.h" @@ -58,10 +59,24 @@ static void queue_event(struct ordered_events *oe, struct ordered_event *new) } #define MAX_SAMPLE_BUFFER (64 * 1024 / sizeof(struct ordered_event)) -static struct ordered_event *alloc_event(struct ordered_events *oe) +static struct ordered_event *alloc_event(struct ordered_events *oe, + union perf_event *event) { struct list_head *cache = &oe->cache; struct ordered_event *new = NULL; + union perf_event *new_event = NULL; + + if (oe->copy_on_queue) { + if (oe->cur_alloc_size < oe->max_alloc_size) { + new_event = memdup(event, event->header.size); + if (new_event) + oe->cur_alloc_size += event->header.size; + } + } else + new_event = event; + + if (!new_event) + return NULL; if (!list_empty(cache)) { new = list_entry(cache->next, struct ordered_event, list); @@ -74,8 +89,13 @@ static struct ordered_event *alloc_event(struct ordered_events *oe) size_t size = MAX_SAMPLE_BUFFER * sizeof(*new); oe->buffer = 
malloc(size); - if (!oe->buffer) + if (!oe->buffer) { + if (oe->copy_on_queue) { + oe->cur_alloc_size -= new_event->header.size; + free(new_event); + } return NULL; + } pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n", oe->cur_alloc_size, size, oe->max_alloc_size); @@ -90,15 +110,19 @@ static struct ordered_event *alloc_event(struct ordered_events *oe) pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size); } + new->event = new_event; + return new; } struct ordered_event * -ordered_events__new(struct ordered_events *oe, u64 timestamp) +ordered_events__new(struct ordered_events *oe, u64 timestamp, + union perf_event *event) { struct ordered_event *new; - new = alloc_event(oe); + new = alloc_event(oe, event); + if (new) { new->timestamp = timestamp; queue_event(oe, new); @@ -111,6 +135,10 @@ void ordered_events__delete(struct ordered_events *oe, struct ordered_event *eve { list_move(&event->list, &oe->cache); oe->nr_events--; + if (oe->copy_on_queue) { + oe->cur_alloc_size -= event->event->header.size; + free(event->event); + } } static int __ordered_events__flush(struct perf_session *s, @@ -240,6 +268,11 @@ void ordered_events__free(struct ordered_events *oe) event = list_entry(oe->to_free.next, struct ordered_event, list); list_del(&event->list); + if (oe->copy_on_queue) { + oe->cur_alloc_size -= event->event->header.size; + free(event->event); + } + free(event); } } diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h index 3b2f205..7b8f9b0 100644 --- a/tools/perf/util/ordered-events.h +++ b/tools/perf/util/ordered-events.h @@ -34,9 +34,11 @@ struct ordered_events { int buffer_idx; unsigned int nr_events; enum oe_flush last_flush_type; + bool copy_on_queue; }; -struct ordered_event *ordered_events__new(struct ordered_events *oe, u64 timestamp); +struct ordered_event *ordered_events__new(struct ordered_events *oe, u64 timestamp, + union perf_event *event); void ordered_events__delete(struct ordered_events *oe, struct 
ordered_event *event); int ordered_events__flush(struct perf_session *s, struct perf_tool *tool, enum oe_flush how); @@ -48,4 +50,10 @@ void ordered_events__set_alloc_size(struct ordered_events *oe, u64 size) { oe->max_alloc_size = size; } + +static inline +void ordered_events__set_copy_on_queue(struct ordered_events *oe, bool copy) +{ + oe->copy_on_queue = copy; +} #endif /* __ORDERED_EVENTS_H */ diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c index 6d2d50d..976064d 100644 --- a/tools/perf/util/session.c +++ b/tools/perf/util/session.c @@ -532,17 +532,16 @@ int perf_session_queue_event(struct perf_session *s, union perf_event *event, return -EINVAL; } - new = ordered_events__new(oe, timestamp); + new = ordered_events__new(oe, timestamp, event); if (!new) { ordered_events__flush(s, tool, OE_FLUSH__HALF); - new = ordered_events__new(oe, timestamp); + new = ordered_events__new(oe, timestamp, event); } if (!new) return -ENOMEM; new->file_offset = file_offset; - new->event = event; return 0; } -- 1.9.1 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-02 10:32 ` [PATCH 1/2] perf tools: Add option to copy events when queueing Alexander Yarygin @ 2014-10-02 14:15 ` Jiri Olsa 2014-10-02 15:21 ` Alexander Yarygin 2014-10-02 15:45 ` David Ahern 1 sibling, 1 reply; 14+ messages in thread From: Jiri Olsa @ 2014-10-02 14:15 UTC (permalink / raw) To: Alexander Yarygin Cc: linux-kernel, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian On Thu, Oct 02, 2014 at 02:32:08PM +0400, Alexander Yarygin wrote: SNIP > + if (!oe->buffer) { > + if (oe->copy_on_queue) { > + oe->cur_alloc_size -= new_event->header.size; > + free(new_event); > + } > return NULL; > + } > > pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n", > oe->cur_alloc_size, size, oe->max_alloc_size); > @@ -90,15 +110,19 @@ static struct ordered_event *alloc_event(struct ordered_events *oe) > pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size); > } > > + new->event = new_event; > + > return new; > } > > struct ordered_event * > -ordered_events__new(struct ordered_events *oe, u64 timestamp) > +ordered_events__new(struct ordered_events *oe, u64 timestamp, > + union perf_event *event) > { > struct ordered_event *new; > > - new = alloc_event(oe); > + new = alloc_event(oe, event); > + > if (new) { > new->timestamp = timestamp; > queue_event(oe, new); > @@ -111,6 +135,10 @@ void ordered_events__delete(struct ordered_events *oe, struct ordered_event *eve > { > list_move(&event->list, &oe->cache); > oe->nr_events--; > + if (oe->copy_on_queue) { > + oe->cur_alloc_size -= event->event->header.size; > + free(event->event); > + } > } > > static int __ordered_events__flush(struct perf_session *s, > @@ -240,6 +268,11 @@ void ordered_events__free(struct ordered_events *oe) > > event = list_entry(oe->to_free.next, struct ordered_event, list); > 
list_del(&event->list); > + if (oe->copy_on_queue) { > + oe->cur_alloc_size -= event->event->header.size; > + free(event->event); > + } > + > free(event); looks ok.. but I was wondering if we could move those repeating bits in function.. something like below (untested, just compiled) thanks, jirka --- diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c index f7383ccc6690..583dcefc92fb 100644 --- a/tools/perf/util/ordered-events.c +++ b/tools/perf/util/ordered-events.c @@ -58,23 +58,41 @@ static void queue_event(struct ordered_events *oe, struct ordered_event *new) } } +static union perf_event *__dup_event(struct ordered_events *oe, union perf_event *event) +{ + union perf_event *new_event = NULL; + + if (oe->cur_alloc_size < oe->max_alloc_size) { + new_event = memdup(event, event->header.size); + if (new_event) + oe->cur_alloc_size += event->header.size; + } + + return new_event; +} + +static union perf_event *dup_event(struct ordered_events *oe, union perf_event *event) +{ + return oe->copy_on_queue ? 
__dup_event(oe, event) : event; +} + +static void free_dup_event(struct ordered_events *oe, union perf_event *event) +{ + if (oe->copy_on_queue) { + oe->cur_alloc_size -= event->header.size; + free(event); + } +} + #define MAX_SAMPLE_BUFFER (64 * 1024 / sizeof(struct ordered_event)) static struct ordered_event *alloc_event(struct ordered_events *oe, union perf_event *event) { struct list_head *cache = &oe->cache; struct ordered_event *new = NULL; - union perf_event *new_event = NULL; - - if (oe->copy_on_queue) { - if (oe->cur_alloc_size < oe->max_alloc_size) { - new_event = memdup(event, event->header.size); - if (new_event) - oe->cur_alloc_size += event->header.size; - } - } else - new_event = event; + union perf_event *new_event; + new_event = dup_event(oe, event); if (!new_event) return NULL; @@ -90,10 +108,7 @@ static struct ordered_event *alloc_event(struct ordered_events *oe, oe->buffer = malloc(size); if (!oe->buffer) { - if (oe->copy_on_queue) { - oe->cur_alloc_size -= new_event->header.size; - free(new_event); - } + free_dup_event(oe, new_event); return NULL; } @@ -135,10 +150,7 @@ void ordered_events__delete(struct ordered_events *oe, struct ordered_event *eve { list_move(&event->list, &oe->cache); oe->nr_events--; - if (oe->copy_on_queue) { - oe->cur_alloc_size -= event->event->header.size; - free(event->event); - } + free_dup_event(oe, event->event); } static int __ordered_events__flush(struct perf_session *s, @@ -268,11 +280,7 @@ void ordered_events__free(struct ordered_events *oe) event = list_entry(oe->to_free.next, struct ordered_event, list); list_del(&event->list); - if (oe->copy_on_queue) { - oe->cur_alloc_size -= event->event->header.size; - free(event->event); - } - + free_dup_event(oe, event->event); free(event); } } ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing
  2014-10-02 14:15 ` Jiri Olsa
@ 2014-10-02 15:21 ` Alexander Yarygin
  2014-10-02 15:33   ` Jiri Olsa
  0 siblings, 1 reply; 14+ messages in thread
From: Alexander Yarygin @ 2014-10-02 15:21 UTC (permalink / raw)
To: Jiri Olsa
Cc: linux-kernel, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian, Alexander Yarygin

Jiri Olsa <jolsa@redhat.com> writes:

[..]
>
> looks ok.. but I was wondering if we could move those repeating
> bits in function.. something like below (untested, just compiled)
>
> thanks,
> jirka
>
[..]

Yep, it's better. Just checked - it works. How should we proceed?

Thanks.

^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-02 15:21 ` Alexander Yarygin @ 2014-10-02 15:33 ` Jiri Olsa 0 siblings, 0 replies; 14+ messages in thread From: Jiri Olsa @ 2014-10-02 15:33 UTC (permalink / raw) To: Alexander Yarygin Cc: linux-kernel, Arnaldo Carvalho de Melo, Christian Borntraeger, David Ahern, Frederic Weisbecker, Ingo Molnar, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian On Thu, Oct 02, 2014 at 07:21:58PM +0400, Alexander Yarygin wrote: > Jiri Olsa <jolsa@redhat.com> writes: > > [..] > > > > > looks ok.. but I was wondering if we could move those repeating > > bits in function.. something like below (untested, just compiled) > > > > thanks, > > jirka > > > > > > [..] > > Yep, it's better. Just checked - it works. How should we process? just update your patch with that change and resend thanks, jirka ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 1/2] perf tools: Add option to copy events when queueing 2014-10-02 10:32 ` [PATCH 1/2] perf tools: Add option to copy events when queueing Alexander Yarygin 2014-10-02 14:15 ` Jiri Olsa @ 2014-10-02 15:45 ` David Ahern 1 sibling, 0 replies; 14+ messages in thread From: David Ahern @ 2014-10-02 15:45 UTC (permalink / raw) To: Alexander Yarygin, linux-kernel Cc: Arnaldo Carvalho de Melo, Christian Borntraeger, Frederic Weisbecker, Ingo Molnar, Jiri Olsa, Mike Galbraith, Namhyung Kim, Paul Mackerras, Peter Zijlstra, Stephane Eranian On 10/2/14, 4:32 AM, Alexander Yarygin wrote: > From: David Ahern <dsahern@gmail.com> At this point the patch is no longer from me. You have done the migration to ordered_events. You can change that to based on a patch from ... ^ permalink raw reply [flat|nested] 14+ messages in thread