LKML Archive on lore.kernel.org
* [PATCH v2] perf ordered_events: fix crash in free_dup_event()
@ 2018-08-08 22:33 Stephane Eranian
  2018-08-09  8:07 ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Stephane Eranian @ 2018-08-08 22:33 UTC (permalink / raw)
  To: linux-kernel; +Cc: acme, peterz, mingo, jolsa

This patch fixes a bug in ordered-events.c:alloc_event().
An ordered_event struct was not initialized properly, potentially
causing crashes later on in free_dup_event() depending on the
content of the memory: if the event pointer happened to be NULL
it would work fine, otherwise it could crash, such as:

$ perf record -o - -e cycles date |  perf inject -b -i - >/dev/null
Tue Aug  7 12:03:48 PDT 2018
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.000 MB - ]
Segmentation fault (core dumped):

(gdb) r inject -b -i - < tt  >/dev/null
Program received signal SIGSEGV, Segmentation fault.
free_dup_event (oe=0x26a39a0, event=0xffffffff00000000) at util/ordered-events.c:85
(gdb) bt
#0  free_dup_event (oe=0x26a39a0, event=0xffffffff00000000) at util/ordered-events.c:85
#1  ordered_events__free (oe=0x26a39a0) at util/ordered-events.c:310
#2  0x00000000004b5a56 in __perf_session__process_pipe_events (session=<optimized out>) at util/session.c:1753
#3  perf_session__process_events (session=<optimized out>) at util/session.c:1932
#4  0x000000000043a2eb in __cmd_inject (inject=<optimized out>) at builtin-inject.c:750
#5  cmd_inject (argc=<optimized out>, argv=<optimized out>) at builtin-inject.c:924
#6  0x000000000046b175 in run_builtin (p=0xabc640 <commands+576>, argc=4, argv=0x7fffffffe560) at perf.c:297
#7  0x000000000046b062 in handle_internal_command (argc=4, argv=0x7fffffffe560) at perf.c:349
#8  0x000000000046a5e8 in run_argv (argcp=<optimized out>, argv=<optimized out>) at perf.c:393
#9  main (argc=4, argv=0x7fffffffe560) at perf.c:531
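
To make the failure mode concrete, here is a minimal standalone sketch
(hypothetical names, not perf code): malloc() leaves pointer fields holding
whatever was in memory, so a later free() on such a field is undefined
behavior unless the field is set first; free(NULL), by contrast, is a
guaranteed no-op.

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for struct ordered_event: just the field that matters here. */
struct entry {
	void *event;
};

/* An entry is only safe to hand to a free-like routine once its pointer
 * field has a defined value; a garbage pointer is undefined behavior. */
static int entry_safe_to_free(const struct entry *e)
{
	return e->event == NULL;
}

static int demo(void)
{
	struct entry *e = malloc(sizeof(*e));
	int ok;

	if (!e)
		return -1;
	e->event = NULL;	/* the one-line fix: never leave it to chance */
	ok = entry_safe_to_free(e);
	free(e->event);		/* defined: frees nothing */
	free(e);
	return ok;
}
```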

Signed-off-by: Stephane Eranian <eranian@google.com>

---
 tools/perf/util/ordered-events.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
index a90dbe5df019..95c91e5a3754 100644
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@ -118,6 +118,12 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 
 		pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n",
 		   oe->cur_alloc_size, size, oe->max_alloc_size);
+		/*
+		 * Must initialize the event pointer of the commandeered
+		 * first entry, to avoid a crash in free_dup_event()
+		 * due to a random value in this field.
+		 */
+		oe->buffer->event = NULL;
 
 		oe->cur_alloc_size += size;
 		list_add(&oe->buffer->list, &oe->to_free);
-- 
2.18.0.597.ga71716f1ad-goog

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2] perf ordered_events: fix crash in free_dup_event()
  2018-08-08 22:33 [PATCH v2] perf ordered_events: fix crash in free_dup_event() Stephane Eranian
@ 2018-08-09  8:07 ` Jiri Olsa
  2018-08-10  8:21   ` Stephane Eranian
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2018-08-09  8:07 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: linux-kernel, acme, peterz, mingo

On Wed, Aug 08, 2018 at 03:33:20PM -0700, Stephane Eranian wrote:
> This patch fixes a bug in ordered_event.c:alloc_event().
> An ordered_event struct was not initialized properly potentially
> causing crashes later on in free_dup_event() depending on the
> content of the memory. If it was NULL, then it would work fine,
> otherwise, it could cause crashes such as:

I'm now a little puzzled what we use this first event for..
I can't see anything special about it, other than that it's
added to the list uninitialized ;-)

it seems to work properly when we ditch it.. might be some
prehistoric leftover, or I'm terribly missing something

thanks,
jirka


---
diff --cc tools/perf/util/ordered-events.c
index bad9e0296e9a,0e837b0b8582..000000000000
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@@ -119,12 -119,8 +119,9 @@@ static struct ordered_event *alloc_even
  		pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n",
  		   oe->cur_alloc_size, size, oe->max_alloc_size);
  
 +		oe->cur_alloc_size += size;
- 		list_add(&oe->buffer->list, &oe->to_free);
- 
- 		/* First entry is abused to maintain the to_free list. */
- 		oe->buffer_idx = 2;
- 		new = oe->buffer + 1;
+ 		oe->buffer_idx = 1;
+ 		new = oe->buffer;
  	} else {
  		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
  	}


* Re: [PATCH v2] perf ordered_events: fix crash in free_dup_event()
  2018-08-09  8:07 ` Jiri Olsa
@ 2018-08-10  8:21   ` Stephane Eranian
  2018-08-10 11:54     ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Stephane Eranian @ 2018-08-10  8:21 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo

On Thu, Aug 9, 2018 at 1:07 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Wed, Aug 08, 2018 at 03:33:20PM -0700, Stephane Eranian wrote:
> > This patch fixes a bug in ordered_event.c:alloc_event().
> > An ordered_event struct was not initialized properly potentially
> > causing crashes later on in free_dup_event() depending on the
> > content of the memory. If it was NULL, then it would work fine,
> > otherwise, it could cause crashes such as:
>
> I'm now little puzzled what do we use this first event for..
> I can't see anything special about it, other than it's added
> on the list uninitialized ;-)
>
> it seems to work properly when we ditch it.. might be some
> prehistoric leftover or I'm terribly missing something
>
You need to keep track of the buffers to free. You do not free the
ordered_event structs individually; for each oe->buffer you need one
free(). Each buffer is put on the to_free list, but to link it into
the list it needs a list_head, and that is what buffer[0] is used for.
But the logic is broken in ordered_events__free(): it does not free
individual ordered_event structs but a buffer holding many of them,
and yet it is missing freeing all of the duped events.

 void ordered_events__free(struct ordered_events *oe)
{
        while (!list_empty(&oe->to_free)) {
                struct ordered_event *buffer;

                buffer = list_entry(oe->to_free.next, struct ordered_event, list);
                list_del(&buffer->list);
---->           free_dup_event(oe, buffer->event);
                free(buffer);
        }
}
This only frees the dup_event of buffer[0], which we know is NULL (well, now).
It needs to walk all the entries in buffer[] to free buffer[x].event.
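
A standalone sketch of the freeing scheme described above (illustrative
names, not the actual perf code): one allocation holds many slots, the
container carries the list linkage, and freeing must walk every slot's
event before freeing the container itself.

```c
#include <assert.h>
#include <stdlib.h>

#define NSLOTS 4	/* stands in for MAX_SAMPLE_BUFFER */

struct slot {
	void *event;	/* stands in for ordered_event::event */
};

struct buf {
	struct buf *next;		/* plays the role of the to_free linkage */
	struct slot slots[NSLOTS];
};

/* Walk all slots, freeing each duped event, then free the buffer.
 * Returns how many events were actually freed (for the demo). */
static int buf__free(struct buf *b)
{
	int i, n = 0;

	for (i = 0; i < NSLOTS; i++) {
		if (b->slots[i].event)
			n++;
		free(b->slots[i].event);	/* free(NULL) is a no-op */
	}
	free(b);
	return n;
}

static int demo(void)
{
	struct buf *b = calloc(1, sizeof(*b));	/* zeroed, unlike malloc() */

	if (!b)
		return -1;
	b->slots[0].event = malloc(16);
	b->slots[2].event = malloc(16);
	return buf__free(b);
}
```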

I think the goal was likely to avoid adding another list_head field
to each ordered_event, and instead use one per allocated buffer.
This is very convoluted and error prone, as we are seeing right now,
and it should be cleaned up: either add a list_head to each
ordered_event, or walk buffer[x] in ordered_events__free().

At this point, this is my understanding.
Do you agree?


>
> thanks,
> jirka
>
>
> ---
> diff --cc tools/perf/util/ordered-events.c
> index bad9e0296e9a,0e837b0b8582..000000000000
> --- a/tools/perf/util/ordered-events.c
> +++ b/tools/perf/util/ordered-events.c
> @@@ -119,12 -119,8 +119,9 @@@ static struct ordered_event *alloc_even
>                 pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n",
>                    oe->cur_alloc_size, size, oe->max_alloc_size);
>
>  +              oe->cur_alloc_size += size;
> -               list_add(&oe->buffer->list, &oe->to_free);
> -
> -               /* First entry is abused to maintain the to_free list. */
> -               oe->buffer_idx = 2;
> -               new = oe->buffer + 1;
> +               oe->buffer_idx = 1;
> +               new = oe->buffer;
>         } else {
>                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
>         }


* Re: [PATCH v2] perf ordered_events: fix crash in free_dup_event()
  2018-08-10  8:21   ` Stephane Eranian
@ 2018-08-10 11:54     ` Jiri Olsa
  2018-08-13 13:04       ` [PATCH] perf tools: Add struct ordered_events_buffer layer Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2018-08-10 11:54 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo

On Fri, Aug 10, 2018 at 01:21:18AM -0700, Stephane Eranian wrote:
> On Thu, Aug 9, 2018 at 1:07 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > On Wed, Aug 08, 2018 at 03:33:20PM -0700, Stephane Eranian wrote:
> > > This patch fixes a bug in ordered_event.c:alloc_event().
> > > An ordered_event struct was not initialized properly potentially
> > > causing crashes later on in free_dup_event() depending on the
> > > content of the memory. If it was NULL, then it would work fine,
> > > otherwise, it could cause crashes such as:
> >
> > I'm now little puzzled what do we use this first event for..
> > I can't see anything special about it, other than it's added
> > on the list uninitialized ;-)
> >
> > it seems to work properly when we ditch it.. might be some
> > prehistoric leftover or I'm terribly missing something
> >
> You need to keep track of the buffers to free. You do not free the
> ordered_event structs
> individually. For each oe->buffer, you need one free(). Each buffer is
> put in the to_free
> list. But to link it into the list it needs a list_head. This is what
> buffer[0] is used for.
> But the logic is broken in  ordered_events__free(). It does not free individual
> ordered_event structs, but a buffer with many. Yet, it is missing
> freeing all of the duped
> events.
> 
>  void ordered_events__free(struct ordered_events *oe)
> {
>         while (!list_empty(&oe->to_free)) {
>                 struct ordered_event *buffer;
> 
>                 buffer = list_entry(oe->to_free.next, struct
> ordered_event, list);
>                 list_del(&buffer->list);
> ---->         free_dup_event(oe, event->event);
>                 free(buffer);
>         }
> }
> This only frees the dup_event of buffer[0] which we know is NULL (well, now).
> It needs to walk all the entries in buffer[] to free buffer[x].event.

yes.. if there's copy_on_queue set, we need to do that,
otherwise we're leaking all the events

> 
> I think the goal was likely to avoid adding another list_head field to
> each ordered_event
> and instead use one per allocated buffer.
> This is very convoluted and prone to errors and we are seeing right
> now. This should
> be cleaned. So either you add a list_head to ordered_event or you
> would buffer[x] in
> ordered_events_free().
> 
> At this point, this is my understanding.
> Do you agree?

yea, I see it now.. thanks for pointing this out

how about something like below? haven't tested properly yet

jirka


---
diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
index bad9e0296e9a..5c0d85e90a18 100644
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
 	return oe->copy_on_queue ? __dup_event(oe, event) : event;
 }
 
-static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
 {
-	if (event && oe->copy_on_queue) {
+	if (event) {
 		oe->cur_alloc_size -= event->header.size;
 		free(event);
 	}
 }
 
+static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+{
+	if (oe->copy_on_queue)
+		__free_dup_event(oe, event);
+}
+
 #define MAX_SAMPLE_BUFFER	(64 * 1024 / sizeof(struct ordered_event))
 static struct ordered_event *alloc_event(struct ordered_events *oe,
 					 union perf_event *event)
@@ -104,11 +110,11 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		new = list_entry(cache->next, struct ordered_event, list);
 		list_del(&new->list);
 	} else if (oe->buffer) {
-		new = oe->buffer + oe->buffer_idx;
+		new = &oe->buffer->event[oe->buffer_idx];
 		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
 			oe->buffer = NULL;
 	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
-		size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
+		size_t size = sizeof(*oe->buffer) + MAX_SAMPLE_BUFFER * sizeof(*new);
 
 		oe->buffer = malloc(size);
 		if (!oe->buffer) {
@@ -122,9 +128,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		oe->cur_alloc_size += size;
 		list_add(&oe->buffer->list, &oe->to_free);
 
-		/* First entry is abused to maintain the to_free list. */
-		oe->buffer_idx = 2;
-		new = oe->buffer + 1;
+		oe->buffer_idx = 1;
+		new = &oe->buffer->event[0];
 	} else {
 		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
 	}
@@ -300,15 +305,27 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
 	oe->deliver	   = deliver;
 }
 
+static void
+ordered_events_buffer__free(struct ordered_events_buffer *buffer,
+			    struct ordered_events *oe)
+{
+	if (oe->copy_on_queue) {
+		unsigned int i;
+
+		for (i = 0; i < MAX_SAMPLE_BUFFER; i++)
+			__free_dup_event(oe, buffer->event[i].event);
+	}
+
+	free(buffer);
+}
+
 void ordered_events__free(struct ordered_events *oe)
 {
-	while (!list_empty(&oe->to_free)) {
-		struct ordered_event *event;
+	struct ordered_events_buffer *buffer, *tmp;
 
-		event = list_entry(oe->to_free.next, struct ordered_event, list);
-		list_del(&event->list);
-		free_dup_event(oe, event->event);
-		free(event);
+	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
+		list_del(&buffer->list);
+		ordered_events_buffer__free(buffer, oe);
 	}
 }
 
diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
index 8c7a2948593e..1338d5c345dc 100644
--- a/tools/perf/util/ordered-events.h
+++ b/tools/perf/util/ordered-events.h
@@ -25,23 +25,28 @@ struct ordered_events;
 typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
 					 struct ordered_event *event);
 
+struct ordered_events_buffer {
+	struct list_head	list;
+	struct ordered_event	event[0];
+};
+
 struct ordered_events {
-	u64			last_flush;
-	u64			next_flush;
-	u64			max_timestamp;
-	u64			max_alloc_size;
-	u64			cur_alloc_size;
-	struct list_head	events;
-	struct list_head	cache;
-	struct list_head	to_free;
-	struct ordered_event	*buffer;
-	struct ordered_event	*last;
-	ordered_events__deliver_t deliver;
-	int			buffer_idx;
-	unsigned int		nr_events;
-	enum oe_flush		last_flush_type;
-	u32			nr_unordered_events;
-	bool                    copy_on_queue;
+	u64				 last_flush;
+	u64				 next_flush;
+	u64				 max_timestamp;
+	u64				 max_alloc_size;
+	u64				 cur_alloc_size;
+	struct list_head		 events;
+	struct list_head		 cache;
+	struct list_head		 to_free;
+	struct ordered_events_buffer	*buffer;
+	struct ordered_event		*last;
+	ordered_events__deliver_t	 deliver;
+	int				 buffer_idx;
+	unsigned int			 nr_events;
+	enum oe_flush			 last_flush_type;
+	u32				 nr_unordered_events;
+	bool				 copy_on_queue;
 };
 
 int ordered_events__queue(struct ordered_events *oe, union perf_event *event,


* [PATCH] perf tools: Add struct ordered_events_buffer layer
  2018-08-10 11:54     ` Jiri Olsa
@ 2018-08-13 13:04       ` Jiri Olsa
  2018-08-14  7:14         ` Stephane Eranian
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2018-08-13 13:04 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo

On Fri, Aug 10, 2018 at 01:54:31PM +0200, Jiri Olsa wrote:
> On Fri, Aug 10, 2018 at 01:21:18AM -0700, Stephane Eranian wrote:
> > On Thu, Aug 9, 2018 at 1:07 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > On Wed, Aug 08, 2018 at 03:33:20PM -0700, Stephane Eranian wrote:
> > > > This patch fixes a bug in ordered_event.c:alloc_event().
> > > > An ordered_event struct was not initialized properly potentially
> > > > causing crashes later on in free_dup_event() depending on the
> > > > content of the memory. If it was NULL, then it would work fine,
> > > > otherwise, it could cause crashes such as:
> > >
> > > I'm now little puzzled what do we use this first event for..
> > > I can't see anything special about it, other than it's added
> > > on the list uninitialized ;-)
> > >
> > > it seems to work properly when we ditch it.. might be some
> > > prehistoric leftover or I'm terribly missing something
> > >
> > You need to keep track of the buffers to free. You do not free the
> > ordered_event structs
> > individually. For each oe->buffer, you need one free(). Each buffer is
> > put in the to_free
> > list. But to link it into the list it needs a list_head. This is what
> > buffer[0] is used for.
> > But the logic is broken in  ordered_events__free(). It does not free individual
> > ordered_event structs, but a buffer with many. Yet, it is missing
> > freeing all of the duped
> > events.
> > 
> >  void ordered_events__free(struct ordered_events *oe)
> > {
> >         while (!list_empty(&oe->to_free)) {
> >                 struct ordered_event *buffer;
> > 
> >                 buffer = list_entry(oe->to_free.next, struct
> > ordered_event, list);
> >                 list_del(&buffer->list);
> > ---->         free_dup_event(oe, event->event);
> >                 free(buffer);
> >         }
> > }
> > This only frees the dup_event of buffer[0] which we know is NULL (well, now).
> > It needs to walk all the entries in buffer[] to free buffer[x].event.
> 
> yes.. if there's copy_on_queue set, we need to do that,
> otherwise we're leaking all the events
> 
> > 
> > I think the goal was likely to avoid adding another list_head field to
> > each ordered_event
> > and instead use one per allocated buffer.
> > This is very convoluted and prone to errors and we are seeing right
> > now. This should
> > be cleaned. So either you add a list_head to ordered_event or you
> > would buffer[x] in
> > ordered_events_free().
> > 
> > At this point, this is my understanding.
> > Do you agree?
> 
> yea, I see it now.. thanks for pointing this out
> 
> how about something like below? haven't tested properly yet

attaching full patch

thanks,
jirka


---
When ordering events, we use preallocated buffers to store separated
events. Those buffers currently don't have their own struct, but since
they are basically array of 'struct ordered_event' objects, we use the
first event to hold buffers data - list head, that holds all buffers
together:

   struct ordered_events {
     ...
     struct ordered_event *buffer;
     ...
   };

   struct ordered_event {
     u64               timestamp;
     u64               file_offset;
     union perf_event  *event;
     struct list_head  list;
   };

This is quite convoluted and error prone as demonstrated by
free-ing issue discovered and fixed by Stephane in here [1].

This patch adds the 'struct ordered_events_buffer' object,
that holds the buffer data and frees it up properly.

[1] - https://marc.info/?l=linux-kernel&m=153376761329335&w=2
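
The layout the patch introduces can be sketched standalone like this (a
simplified model, with a plain pointer standing in for struct list_head):
a small header plus a flexible array in a single allocation, so one
free() releases the whole buffer.

```c
#include <assert.h>
#include <stdlib.h>

struct ev {
	void *event;
};

/* Simplified model of struct ordered_events_buffer: linkage first,
 * then a C99 flexible array member (the patch spells it event[0]). */
struct ev_buffer {
	struct ev_buffer *next;	/* stand-in for struct list_head */
	struct ev event[];
};

/* One allocation covers the header plus nr events, matching the
 * sizeof(*oe->buffer) + MAX_SAMPLE_BUFFER * sizeof(*new) computation
 * in alloc_event().  calloc() also zeroes every event pointer. */
static struct ev_buffer *ev_buffer__new(size_t nr)
{
	return calloc(1, sizeof(struct ev_buffer) + nr * sizeof(struct ev));
}

static int demo(void)
{
	struct ev_buffer *b = ev_buffer__new(4);
	int zeroed;

	if (!b)
		return -1;
	zeroed = !b->event[0].event && !b->event[3].event;
	free(b);	/* single free() releases header and array together */
	return zeroed;
}
```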

Reported-by: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-qrkcqm5m1sugy4q83pfn5a1r@git.kernel.org
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/perf/util/ordered-events.c | 44 ++++++++++++++++++++++----------
 tools/perf/util/ordered-events.h | 37 +++++++++++++++------------
 2 files changed, 52 insertions(+), 29 deletions(-)

diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
index bad9e0296e9a..038515a52e2c 100644
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
 	return oe->copy_on_queue ? __dup_event(oe, event) : event;
 }
 
-static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
 {
-	if (event && oe->copy_on_queue) {
+	if (event) {
 		oe->cur_alloc_size -= event->header.size;
 		free(event);
 	}
 }
 
+static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+{
+	if (oe->copy_on_queue)
+		__free_dup_event(oe, event);
+}
+
 #define MAX_SAMPLE_BUFFER	(64 * 1024 / sizeof(struct ordered_event))
 static struct ordered_event *alloc_event(struct ordered_events *oe,
 					 union perf_event *event)
@@ -104,11 +110,12 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		new = list_entry(cache->next, struct ordered_event, list);
 		list_del(&new->list);
 	} else if (oe->buffer) {
-		new = oe->buffer + oe->buffer_idx;
+		new = &oe->buffer->event[oe->buffer_idx];
 		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
 			oe->buffer = NULL;
 	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
-		size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
+		size_t size = sizeof(*oe->buffer) +
+			      MAX_SAMPLE_BUFFER * sizeof(*new);
 
 		oe->buffer = malloc(size);
 		if (!oe->buffer) {
@@ -122,9 +129,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		oe->cur_alloc_size += size;
 		list_add(&oe->buffer->list, &oe->to_free);
 
-		/* First entry is abused to maintain the to_free list. */
-		oe->buffer_idx = 2;
-		new = oe->buffer + 1;
+		oe->buffer_idx = 1;
+		new = &oe->buffer->event[0];
 	} else {
 		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
 	}
@@ -300,15 +306,27 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
 	oe->deliver	   = deliver;
 }
 
+static void
+ordered_events_buffer__free(struct ordered_events_buffer *buffer,
+			    struct ordered_events *oe)
+{
+	if (oe->copy_on_queue) {
+		unsigned int i;
+
+		for (i = 0; i < MAX_SAMPLE_BUFFER; i++)
+			__free_dup_event(oe, buffer->event[i].event);
+	}
+
+	free(buffer);
+}
+
 void ordered_events__free(struct ordered_events *oe)
 {
-	while (!list_empty(&oe->to_free)) {
-		struct ordered_event *event;
+	struct ordered_events_buffer *buffer, *tmp;
 
-		event = list_entry(oe->to_free.next, struct ordered_event, list);
-		list_del(&event->list);
-		free_dup_event(oe, event->event);
-		free(event);
+	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
+		list_del(&buffer->list);
+		ordered_events_buffer__free(buffer, oe);
 	}
 }
 
diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
index 8c7a2948593e..1338d5c345dc 100644
--- a/tools/perf/util/ordered-events.h
+++ b/tools/perf/util/ordered-events.h
@@ -25,23 +25,28 @@ struct ordered_events;
 typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
 					 struct ordered_event *event);
 
+struct ordered_events_buffer {
+	struct list_head	list;
+	struct ordered_event	event[0];
+};
+
 struct ordered_events {
-	u64			last_flush;
-	u64			next_flush;
-	u64			max_timestamp;
-	u64			max_alloc_size;
-	u64			cur_alloc_size;
-	struct list_head	events;
-	struct list_head	cache;
-	struct list_head	to_free;
-	struct ordered_event	*buffer;
-	struct ordered_event	*last;
-	ordered_events__deliver_t deliver;
-	int			buffer_idx;
-	unsigned int		nr_events;
-	enum oe_flush		last_flush_type;
-	u32			nr_unordered_events;
-	bool                    copy_on_queue;
+	u64				 last_flush;
+	u64				 next_flush;
+	u64				 max_timestamp;
+	u64				 max_alloc_size;
+	u64				 cur_alloc_size;
+	struct list_head		 events;
+	struct list_head		 cache;
+	struct list_head		 to_free;
+	struct ordered_events_buffer	*buffer;
+	struct ordered_event		*last;
+	ordered_events__deliver_t	 deliver;
+	int				 buffer_idx;
+	unsigned int			 nr_events;
+	enum oe_flush			 last_flush_type;
+	u32				 nr_unordered_events;
+	bool				 copy_on_queue;
 };
 
 int ordered_events__queue(struct ordered_events *oe, union perf_event *event,
-- 
2.17.1



* Re: [PATCH] perf tools: Add struct ordered_events_buffer layer
  2018-08-13 13:04       ` [PATCH] perf tools: Add struct ordered_events_buffer layer Jiri Olsa
@ 2018-08-14  7:14         ` Stephane Eranian
  2018-08-15  8:48           ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Stephane Eranian @ 2018-08-14  7:14 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo

Jiri,
On Mon, Aug 13, 2018 at 6:04 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Fri, Aug 10, 2018 at 01:54:31PM +0200, Jiri Olsa wrote:
> > On Fri, Aug 10, 2018 at 01:21:18AM -0700, Stephane Eranian wrote:
> > > On Thu, Aug 9, 2018 at 1:07 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > > >
> > > > On Wed, Aug 08, 2018 at 03:33:20PM -0700, Stephane Eranian wrote:
> > > > > This patch fixes a bug in ordered_event.c:alloc_event().
> > > > > An ordered_event struct was not initialized properly potentially
> > > > > causing crashes later on in free_dup_event() depending on the
> > > > > content of the memory. If it was NULL, then it would work fine,
> > > > > otherwise, it could cause crashes such as:
> > > >
> > > > I'm now little puzzled what do we use this first event for..
> > > > I can't see anything special about it, other than it's added
> > > > on the list uninitialized ;-)
> > > >
> > > > it seems to work properly when we ditch it.. might be some
> > > > prehistoric leftover or I'm terribly missing something
> > > >
> > > You need to keep track of the buffers to free. You do not free the
> > > ordered_event structs
> > > individually. For each oe->buffer, you need one free(). Each buffer is
> > > put in the to_free
> > > list. But to link it into the list it needs a list_head. This is what
> > > buffer[0] is used for.
> > > But the logic is broken in  ordered_events__free(). It does not free individual
> > > ordered_event structs, but a buffer with many. Yet, it is missing
> > > freeing all of the duped
> > > events.
> > >
> > >  void ordered_events__free(struct ordered_events *oe)
> > > {
> > >         while (!list_empty(&oe->to_free)) {
> > >                 struct ordered_event *buffer;
> > >
> > >                 buffer = list_entry(oe->to_free.next, struct
> > > ordered_event, list);
> > >                 list_del(&buffer->list);
> > > ---->         free_dup_event(oe, event->event);
> > >                 free(buffer);
> > >         }
> > > }
> > > This only frees the dup_event of buffer[0] which we know is NULL (well, now).
> > > It needs to walk all the entries in buffer[] to free buffer[x].event.
> >
> > yes.. if there's copy_on_queue set, we need to do that,
> > otherwise we're leaking all the events
> >
> > >
> > > I think the goal was likely to avoid adding another list_head field to
> > > each ordered_event
> > > and instead use one per allocated buffer.
> > > This is very convoluted and prone to errors and we are seeing right
> > > now. This should
> > > be cleaned. So either you add a list_head to ordered_event or you
> > > would buffer[x] in
> > > ordered_events_free().
> > >
> > > At this point, this is my understanding.
> > > Do you agree?
> >
> > yea, I see it now.. thanks for pointing this out
> >
> > how about something like below? haven't tested properly yet
>
> attaching full patch
>
> thanks,
> jirka
>
>
> ---
> When ordering events, we use preallocated buffers to store separated
> events. Those buffers currently don't have their own struct, but since
> they are basically array of 'struct ordered_event' objects, we use the
> first event to hold buffers data - list head, that holds all buffers
> together:
>
>    struct ordered_events {
>      ...
>      struct ordered_event *buffer;
>      ...
>    };
>
>    struct ordered_event {
>      u64               timestamp;
>      u64               file_offset;
>      union perf_event  *event;
>      struct list_head  list;
>    };
>
> This is quite convoluted and error prone as demonstrated by
> free-ing issue discovered and fixed by Stephane in here [1].
>
> This patch adds the 'struct ordered_events_buffer' object,
> that holds the buffer data and frees it up properly.
>
> [1] - https://marc.info/?l=linux-kernel&m=153376761329335&w=2
>
> Reported-by: Stephane Eranian <eranian@google.com>
> Link: http://lkml.kernel.org/n/tip-qrkcqm5m1sugy4q83pfn5a1r@git.kernel.org
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/perf/util/ordered-events.c | 44 ++++++++++++++++++++++----------
>  tools/perf/util/ordered-events.h | 37 +++++++++++++++------------
>  2 files changed, 52 insertions(+), 29 deletions(-)
>
> diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
> index bad9e0296e9a..038515a52e2c 100644
> --- a/tools/perf/util/ordered-events.c
> +++ b/tools/perf/util/ordered-events.c
> @@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
>         return oe->copy_on_queue ? __dup_event(oe, event) : event;
>  }
>
> -static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> +static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
>  {
> -       if (event && oe->copy_on_queue) {
> +       if (event) {
>                 oe->cur_alloc_size -= event->header.size;
>                 free(event);
>         }
>  }
>
> +static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> +{
> +       if (oe->copy_on_queue)
> +               __free_dup_event(oe, event);
> +}
> +
>  #define MAX_SAMPLE_BUFFER      (64 * 1024 / sizeof(struct ordered_event))
>  static struct ordered_event *alloc_event(struct ordered_events *oe,
>                                          union perf_event *event)
> @@ -104,11 +110,12 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
>                 new = list_entry(cache->next, struct ordered_event, list);
>                 list_del(&new->list);
>         } else if (oe->buffer) {
> -               new = oe->buffer + oe->buffer_idx;
> +               new = &oe->buffer->event[oe->buffer_idx];
>                 if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
>                         oe->buffer = NULL;
>         } else if (oe->cur_alloc_size < oe->max_alloc_size) {
> -               size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
> +               size_t size = sizeof(*oe->buffer) +
> +                             MAX_SAMPLE_BUFFER * sizeof(*new);
>
>                 oe->buffer = malloc(size);
>                 if (!oe->buffer) {
> @@ -122,9 +129,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
>                 oe->cur_alloc_size += size;
>                 list_add(&oe->buffer->list, &oe->to_free);
>
> -               /* First entry is abused to maintain the to_free list. */
> -               oe->buffer_idx = 2;
> -               new = oe->buffer + 1;
> +               oe->buffer_idx = 1;
> +               new = &oe->buffer->event[0];

Ok, but I think this section between the malloc() and the line above
needs some comments to clarify what is going on.
It is still hard to read.

>         } else {
>                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
>         }
> @@ -300,15 +306,27 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
>         oe->deliver        = deliver;
>  }
>
> +static void
> +ordered_events_buffer__free(struct ordered_events_buffer *buffer,
> +                           struct ordered_events *oe)
> +{
> +       if (oe->copy_on_queue) {
> +               unsigned int i;
> +
> +               for (i = 0; i < MAX_SAMPLE_BUFFER; i++)
> +                       __free_dup_event(oe, buffer->event[i].event);
> +       }
> +
I have a problem with this one, given that buffer->event[] is never
actually zeroed. So what happens if you do not use all the entries by
the time you have to free? I think one way to avoid this is by
iterating only up to oe->buffer_idx.
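
The bounded loop suggested here could look like the following sketch
(hypothetical names; note the real code would still have to handle
earlier buffers that were filled completely, for which buffer_idx has
since been reset):

```c
#include <assert.h>
#include <stdlib.h>

#define NSLOTS 8	/* stands in for MAX_SAMPLE_BUFFER */

struct slot {
	void *event;
};

struct buffer {
	size_t used;	/* stands in for oe->buffer_idx */
	struct slot event[NSLOTS];
};

/* Free only the entries actually handed out, so the loop never touches
 * the uninitialized tail of event[].  Returns the number of entries
 * visited (for the demo). */
static size_t buffer__free_used(struct buffer *b)
{
	size_t i;

	for (i = 0; i < b->used; i++)
		free(b->event[i].event);
	return i;
}

static size_t demo(void)
{
	struct buffer b;
	size_t i;

	b.used = 3;	/* the tail of event[] stays uninitialized */
	for (i = 0; i < b.used; i++)
		b.event[i].event = malloc(16);
	return buffer__free_used(&b);
}
```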

> +       free(buffer);
> +}
> +
>  void ordered_events__free(struct ordered_events *oe)
>  {
> -       while (!list_empty(&oe->to_free)) {
> -               struct ordered_event *event;
> +       struct ordered_events_buffer *buffer, *tmp;
>
> -               event = list_entry(oe->to_free.next, struct ordered_event, list);
> -               list_del(&event->list);
> -               free_dup_event(oe, event->event);
> -               free(event);
> +       list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
> +               list_del(&buffer->list);
> +               ordered_events_buffer__free(buffer, oe);
>         }
>  }
>
> diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
> index 8c7a2948593e..1338d5c345dc 100644
> --- a/tools/perf/util/ordered-events.h
> +++ b/tools/perf/util/ordered-events.h
> @@ -25,23 +25,28 @@ struct ordered_events;
>  typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
>                                          struct ordered_event *event);
>
> +struct ordered_events_buffer {
> +       struct list_head        list;
> +       struct ordered_event    event[0];
> +};
> +
>  struct ordered_events {
> -       u64                     last_flush;
> -       u64                     next_flush;
> -       u64                     max_timestamp;
> -       u64                     max_alloc_size;
> -       u64                     cur_alloc_size;
> -       struct list_head        events;
> -       struct list_head        cache;
> -       struct list_head        to_free;
> -       struct ordered_event    *buffer;
> -       struct ordered_event    *last;
> -       ordered_events__deliver_t deliver;
> -       int                     buffer_idx;
> -       unsigned int            nr_events;
> -       enum oe_flush           last_flush_type;
> -       u32                     nr_unordered_events;
> -       bool                    copy_on_queue;
> +       u64                              last_flush;
> +       u64                              next_flush;
> +       u64                              max_timestamp;
> +       u64                              max_alloc_size;
> +       u64                              cur_alloc_size;
> +       struct list_head                 events;
> +       struct list_head                 cache;
> +       struct list_head                 to_free;
> +       struct ordered_events_buffer    *buffer;
> +       struct ordered_event            *last;
> +       ordered_events__deliver_t        deliver;
> +       int                              buffer_idx;
> +       unsigned int                     nr_events;
> +       enum oe_flush                    last_flush_type;
> +       u32                              nr_unordered_events;
> +       bool                             copy_on_queue;
>  };
>
>  int ordered_events__queue(struct ordered_events *oe, union perf_event *event,
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH] perf tools: Add struct ordered_events_buffer layer
  2018-08-14  7:14         ` Stephane Eranian
@ 2018-08-15  8:48           ` Jiri Olsa
  2018-08-27  9:28             ` [PATCHv2] " Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2018-08-15  8:48 UTC (permalink / raw)
  To: Stephane Eranian; +Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo

On Tue, Aug 14, 2018 at 12:14:19AM -0700, Stephane Eranian wrote:

SNIP

> > @@ -104,11 +110,12 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
> >                 new = list_entry(cache->next, struct ordered_event, list);
> >                 list_del(&new->list);
> >         } else if (oe->buffer) {
> > -               new = oe->buffer + oe->buffer_idx;
> > +               new = &oe->buffer->event[oe->buffer_idx];
> >                 if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
> >                         oe->buffer = NULL;
> >         } else if (oe->cur_alloc_size < oe->max_alloc_size) {
> > -               size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
> > +               size_t size = sizeof(*oe->buffer) +
> > +                             MAX_SAMPLE_BUFFER * sizeof(*new);
> >
> >                 oe->buffer = malloc(size);
> >                 if (!oe->buffer) {
> > @@ -122,9 +129,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
> >                 oe->cur_alloc_size += size;
> >                 list_add(&oe->buffer->list, &oe->to_free);
> >
> > -               /* First entry is abused to maintain the to_free list. */
> > -               oe->buffer_idx = 2;
> > -               new = oe->buffer + 1;
> > +               oe->buffer_idx = 1;
> > +               new = &oe->buffer->event[0];
> 
> Ok, but I think this section between the malloc() and the line above
> needs some comments to clarify what is going on.
> It is still hard to read.

ok, I put some bigger comment at the top, but I'm not too happy
feel free to suggest different one ;-)

> 
> >         } else {
> >                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
> >         }
> > @@ -300,15 +306,27 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
> >         oe->deliver        = deliver;
> >  }
> >
> > +static void
> > +ordered_events_buffer__free(struct ordered_events_buffer *buffer,
> > +                           struct ordered_events *oe)
> > +{
> > +       if (oe->copy_on_queue) {
> > +               unsigned int i;
> > +
> > +               for (i = 0; i < MAX_SAMPLE_BUFFER; i++)
> > +                       __free_dup_event(oe, buffer->event[i].event);
> > +       }
> > +
> I have a problem with this one, given that the buffer->event[] is
> never actually zeroed.
> So what happens if you do not use all the entries by the time you have to free?
> I think one way to avoid this is by iterating only all the way to
> oe->buffer_idx.

right, please check attached patch

thanks,
jirka


---
diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
index bad9e0296e9a..7a9707301f17 100644
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
 	return oe->copy_on_queue ? __dup_event(oe, event) : event;
 }
 
-static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
 {
-	if (event && oe->copy_on_queue) {
+	if (event) {
 		oe->cur_alloc_size -= event->header.size;
 		free(event);
 	}
 }
 
+static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+{
+	if (oe->copy_on_queue)
+		__free_dup_event(oe, event);
+}
+
 #define MAX_SAMPLE_BUFFER	(64 * 1024 / sizeof(struct ordered_event))
 static struct ordered_event *alloc_event(struct ordered_events *oe,
 					 union perf_event *event)
@@ -100,15 +106,43 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 	if (!new_event)
 		return NULL;
 
+	/*
+	 * We maintain following scheme of buffers for ordered
+	 * event allocation:
+	 *
+	 *   to_free list -> buffer1 (64K)
+	 *                   buffer2 (64K)
+	 *                   ...
+	 *
+	 * Each buffer keeps an array of ordered events objects:
+	 *    buffer -> event[0]
+	 *              event[1]
+	 *              ...
+	 *
+	 * Each allocated ordered event is linked to one of
+	 * following lists:
+	 *   - time ordered list 'events'
+	 *   - list of currently removed events 'cache'
+	 *
+	 * Allocation of the ordered event uses following order
+	 * to get the memory:
+	 *   - use recently removed object from 'cache' list
+	 *   - use available object in current allocation buffer
+	 *   - allocate new buffer if the current buffer is full
+	 *
+	 * Removal of ordered event object moves it from events to
+	 * the cache list.
+	 */
 	if (!list_empty(cache)) {
 		new = list_entry(cache->next, struct ordered_event, list);
 		list_del(&new->list);
 	} else if (oe->buffer) {
-		new = oe->buffer + oe->buffer_idx;
+		new = &oe->buffer->event[oe->buffer_idx];
 		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
 			oe->buffer = NULL;
 	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
-		size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
+		size_t size = sizeof(*oe->buffer) +
+			      MAX_SAMPLE_BUFFER * sizeof(*new);
 
 		oe->buffer = malloc(size);
 		if (!oe->buffer) {
@@ -122,9 +156,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		oe->cur_alloc_size += size;
 		list_add(&oe->buffer->list, &oe->to_free);
 
-		/* First entry is abused to maintain the to_free list. */
-		oe->buffer_idx = 2;
-		new = oe->buffer + 1;
+		oe->buffer_idx = 1;
+		new = &oe->buffer->event[0];
 	} else {
 		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
 	}
@@ -300,15 +333,35 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
 	oe->deliver	   = deliver;
 }
 
+static void
+ordered_events_buffer__free(struct ordered_events_buffer *buffer,
+			    unsigned int max, struct ordered_events *oe)
+{
+	if (oe->copy_on_queue) {
+		unsigned int i;
+
+		for (i = 0; i < max; i++)
+			__free_dup_event(oe, buffer->event[i].event);
+	}
+
+	free(buffer);
+}
+
 void ordered_events__free(struct ordered_events *oe)
 {
-	while (!list_empty(&oe->to_free)) {
-		struct ordered_event *event;
+	struct ordered_events_buffer *buffer, *tmp;
 
-		event = list_entry(oe->to_free.next, struct ordered_event, list);
-		list_del(&event->list);
-		free_dup_event(oe, event->event);
-		free(event);
+	/*
+	 * Current buffer might not have all the events allocated
+	 * yet, we need to free only allocated ones ...
+	 */
+	list_del(&oe->buffer->list);
+	ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
+
+	/* ... and continue with the rest */
+	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
+		list_del(&buffer->list);
+		ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
 	}
 }
 
diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
index 8c7a2948593e..1338d5c345dc 100644
--- a/tools/perf/util/ordered-events.h
+++ b/tools/perf/util/ordered-events.h
@@ -25,23 +25,28 @@ struct ordered_events;
 typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
 					 struct ordered_event *event);
 
+struct ordered_events_buffer {
+	struct list_head	list;
+	struct ordered_event	event[0];
+};
+
 struct ordered_events {
-	u64			last_flush;
-	u64			next_flush;
-	u64			max_timestamp;
-	u64			max_alloc_size;
-	u64			cur_alloc_size;
-	struct list_head	events;
-	struct list_head	cache;
-	struct list_head	to_free;
-	struct ordered_event	*buffer;
-	struct ordered_event	*last;
-	ordered_events__deliver_t deliver;
-	int			buffer_idx;
-	unsigned int		nr_events;
-	enum oe_flush		last_flush_type;
-	u32			nr_unordered_events;
-	bool                    copy_on_queue;
+	u64				 last_flush;
+	u64				 next_flush;
+	u64				 max_timestamp;
+	u64				 max_alloc_size;
+	u64				 cur_alloc_size;
+	struct list_head		 events;
+	struct list_head		 cache;
+	struct list_head		 to_free;
+	struct ordered_events_buffer	*buffer;
+	struct ordered_event		*last;
+	ordered_events__deliver_t	 deliver;
+	int				 buffer_idx;
+	unsigned int			 nr_events;
+	enum oe_flush			 last_flush_type;
+	u32				 nr_unordered_events;
+	bool				 copy_on_queue;
 };
 
 int ordered_events__queue(struct ordered_events *oe, union perf_event *event,


* [PATCHv2] perf tools: Add struct ordered_events_buffer layer
  2018-08-15  8:48           ` Jiri Olsa
@ 2018-08-27  9:28             ` " Jiri Olsa
  2018-08-27 11:07               ` Namhyung Kim
  2018-08-27 15:24               ` Stephane Eranian
  0 siblings, 2 replies; 15+ messages in thread
From: Jiri Olsa @ 2018-08-27  9:28 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo, Namhyung Kim

On Wed, Aug 15, 2018 at 10:48:25AM +0200, Jiri Olsa wrote:
> On Tue, Aug 14, 2018 at 12:14:19AM -0700, Stephane Eranian wrote:
> 
> SNIP
> 
> > > @@ -104,11 +110,12 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
> > >                 new = list_entry(cache->next, struct ordered_event, list);
> > >                 list_del(&new->list);
> > >         } else if (oe->buffer) {
> > > -               new = oe->buffer + oe->buffer_idx;
> > > +               new = &oe->buffer->event[oe->buffer_idx];
> > >                 if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
> > >                         oe->buffer = NULL;
> > >         } else if (oe->cur_alloc_size < oe->max_alloc_size) {
> > > -               size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
> > > +               size_t size = sizeof(*oe->buffer) +
> > > +                             MAX_SAMPLE_BUFFER * sizeof(*new);
> > >
> > >                 oe->buffer = malloc(size);
> > >                 if (!oe->buffer) {
> > > @@ -122,9 +129,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
> > >                 oe->cur_alloc_size += size;
> > >                 list_add(&oe->buffer->list, &oe->to_free);
> > >
> > > -               /* First entry is abused to maintain the to_free list. */
> > > -               oe->buffer_idx = 2;
> > > -               new = oe->buffer + 1;
> > > +               oe->buffer_idx = 1;
> > > +               new = &oe->buffer->event[0];
> > 
> > Ok, but I think this section between the malloc() and the line above
> > needs some comments to clarify what is going on.
> > It is still hard to read.
> 
> ok, I put some bigger comment at the top, but I'm not too happy
> feel free to suggest different one ;-)
> 
> > 
> > >         } else {
> > >                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
> > >         }
> > > @@ -300,15 +306,27 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
> > >         oe->deliver        = deliver;
> > >  }
> > >
> > > +static void
> > > +ordered_events_buffer__free(struct ordered_events_buffer *buffer,
> > > +                           struct ordered_events *oe)
> > > +{
> > > +       if (oe->copy_on_queue) {
> > > +               unsigned int i;
> > > +
> > > +               for (i = 0; i < MAX_SAMPLE_BUFFER; i++)
> > > +                       __free_dup_event(oe, buffer->event[i].event);
> > > +       }
> > > +
> > I have a problem with this one, given that the buffer->event[] is
> > never actually zeroed.
> > So what happens if you do not use all the entries by the time you have to free?
> > I think one way to avoid this is by iterating only all the way to
> > oe->buffer_idx.
> 
> right, please check attached patch

any comments? attaching v2

thanks,
jirka


---
When ordering events, we use preallocated buffers to store separated
events. Those buffers currently don't have their own struct, but since
they are basically an array of 'struct ordered_event' objects, we use the
first event to hold the buffer's data - the list head that links all
buffers together:

   struct ordered_events {
     ...
     struct ordered_event *buffer;
     ...
   };

   struct ordered_event {
     u64               timestamp;
     u64               file_offset;
     union perf_event  *event;
     struct list_head  list;
   };

This is quite convoluted and error-prone, as demonstrated by the
freeing issue discovered and fixed by Stephane in [1].

This patch adds a 'struct ordered_events_buffer' object that holds
the buffer data, and frees it up properly.

[1] - https://marc.info/?l=linux-kernel&m=153376761329335&w=2

Reported-by: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-qrkcqm5m1sugy4q83pfn5a1r@git.kernel.org
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/perf/util/ordered-events.c | 82 +++++++++++++++++++++++++++-----
 tools/perf/util/ordered-events.h | 37 +++++++-------
 2 files changed, 90 insertions(+), 29 deletions(-)

diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
index bad9e0296e9a..3672060508a7 100644
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
 	return oe->copy_on_queue ? __dup_event(oe, event) : event;
 }
 
-static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
 {
-	if (event && oe->copy_on_queue) {
+	if (event) {
 		oe->cur_alloc_size -= event->header.size;
 		free(event);
 	}
 }
 
+static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+{
+	if (oe->copy_on_queue)
+		__free_dup_event(oe, event);
+}
+
 #define MAX_SAMPLE_BUFFER	(64 * 1024 / sizeof(struct ordered_event))
 static struct ordered_event *alloc_event(struct ordered_events *oe,
 					 union perf_event *event)
@@ -100,15 +106,43 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 	if (!new_event)
 		return NULL;
 
+	/*
+	 * We maintain following scheme of buffers for ordered
+	 * event allocation:
+	 *
+	 *   to_free list -> buffer1 (64K)
+	 *                   buffer2 (64K)
+	 *                   ...
+	 *
+	 * Each buffer keeps an array of ordered events objects:
+	 *    buffer -> event[0]
+	 *              event[1]
+	 *              ...
+	 *
+	 * Each allocated ordered event is linked to one of
+	 * following lists:
+	 *   - time ordered list 'events'
+	 *   - list of currently removed events 'cache'
+	 *
+	 * Allocation of the ordered event uses following order
+	 * to get the memory:
+	 *   - use recently removed object from 'cache' list
+	 *   - use available object in current allocation buffer
+	 *   - allocate new buffer if the current buffer is full
+	 *
+	 * Removal of ordered event object moves it from events to
+	 * the cache list.
+	 */
 	if (!list_empty(cache)) {
 		new = list_entry(cache->next, struct ordered_event, list);
 		list_del(&new->list);
 	} else if (oe->buffer) {
-		new = oe->buffer + oe->buffer_idx;
+		new = &oe->buffer->event[oe->buffer_idx];
 		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
 			oe->buffer = NULL;
 	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
-		size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
+		size_t size = sizeof(*oe->buffer) +
+			      MAX_SAMPLE_BUFFER * sizeof(*new);
 
 		oe->buffer = malloc(size);
 		if (!oe->buffer) {
@@ -122,9 +156,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		oe->cur_alloc_size += size;
 		list_add(&oe->buffer->list, &oe->to_free);
 
-		/* First entry is abused to maintain the to_free list. */
-		oe->buffer_idx = 2;
-		new = oe->buffer + 1;
+		oe->buffer_idx = 1;
+		new = &oe->buffer->event[0];
 	} else {
 		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
 	}
@@ -300,15 +333,38 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
 	oe->deliver	   = deliver;
 }
 
+static void
+ordered_events_buffer__free(struct ordered_events_buffer *buffer,
+			    unsigned int max, struct ordered_events *oe)
+{
+	if (oe->copy_on_queue) {
+		unsigned int i;
+
+		for (i = 0; i < max; i++)
+			__free_dup_event(oe, buffer->event[i].event);
+	}
+
+	free(buffer);
+}
+
 void ordered_events__free(struct ordered_events *oe)
 {
-	while (!list_empty(&oe->to_free)) {
-		struct ordered_event *event;
+	struct ordered_events_buffer *buffer, *tmp;
 
-		event = list_entry(oe->to_free.next, struct ordered_event, list);
-		list_del(&event->list);
-		free_dup_event(oe, event->event);
-		free(event);
+	if (list_empty(&oe->to_free))
+		return;
+
+	/*
+	 * Current buffer might not have all the events allocated
+	 * yet, we need to free only allocated ones ...
+	 */
+	list_del(&oe->buffer->list);
+	ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
+
+	/* ... and continue with the rest */
+	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
+		list_del(&buffer->list);
+		ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
 	}
 }
 
diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
index 8c7a2948593e..1338d5c345dc 100644
--- a/tools/perf/util/ordered-events.h
+++ b/tools/perf/util/ordered-events.h
@@ -25,23 +25,28 @@ struct ordered_events;
 typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
 					 struct ordered_event *event);
 
+struct ordered_events_buffer {
+	struct list_head	list;
+	struct ordered_event	event[0];
+};
+
 struct ordered_events {
-	u64			last_flush;
-	u64			next_flush;
-	u64			max_timestamp;
-	u64			max_alloc_size;
-	u64			cur_alloc_size;
-	struct list_head	events;
-	struct list_head	cache;
-	struct list_head	to_free;
-	struct ordered_event	*buffer;
-	struct ordered_event	*last;
-	ordered_events__deliver_t deliver;
-	int			buffer_idx;
-	unsigned int		nr_events;
-	enum oe_flush		last_flush_type;
-	u32			nr_unordered_events;
-	bool                    copy_on_queue;
+	u64				 last_flush;
+	u64				 next_flush;
+	u64				 max_timestamp;
+	u64				 max_alloc_size;
+	u64				 cur_alloc_size;
+	struct list_head		 events;
+	struct list_head		 cache;
+	struct list_head		 to_free;
+	struct ordered_events_buffer	*buffer;
+	struct ordered_event		*last;
+	ordered_events__deliver_t	 deliver;
+	int				 buffer_idx;
+	unsigned int			 nr_events;
+	enum oe_flush			 last_flush_type;
+	u32				 nr_unordered_events;
+	bool				 copy_on_queue;
 };
 
 int ordered_events__queue(struct ordered_events *oe, union perf_event *event,
-- 
2.17.1



* Re: [PATCHv2] perf tools: Add struct ordered_events_buffer layer
  2018-08-27  9:28             ` [PATCHv2] " Jiri Olsa
@ 2018-08-27 11:07               ` Namhyung Kim
  2018-08-27 15:24               ` Stephane Eranian
  1 sibling, 0 replies; 15+ messages in thread
From: Namhyung Kim @ 2018-08-27 11:07 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Stephane Eranian, LKML, Arnaldo Carvalho de Melo, Peter Zijlstra,
	mingo, kernel-team

On Mon, Aug 27, 2018 at 11:28:18AM +0200, Jiri Olsa wrote:

> When ordering events, we use preallocated buffers to store separated
> events. Those buffers currently don't have their own struct, but since
> they are basically an array of 'struct ordered_event' objects, we use the
> first event to hold the buffer's data - the list head that links all
> buffers together:
> 
>    struct ordered_events {
>      ...
>      struct ordered_event *buffer;
>      ...
>    };
> 
>    struct ordered_event {
>      u64               timestamp;
>      u64               file_offset;
>      union perf_event  *event;
>      struct list_head  list;
>    };
> 
> This is quite convoluted and error-prone, as demonstrated by the
> freeing issue discovered and fixed by Stephane in [1].
> 
> This patch adds a 'struct ordered_events_buffer' object that holds
> the buffer data, and frees it up properly.
> 
> [1] - https://marc.info/?l=linux-kernel&m=153376761329335&w=2
> 
> Reported-by: Stephane Eranian <eranian@google.com>
> Link: http://lkml.kernel.org/n/tip-qrkcqm5m1sugy4q83pfn5a1r@git.kernel.org
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>

Acked-by: Namhyung Kim <namhyung@kernel.org>

Thanks,
Namhyung


> ---
>  tools/perf/util/ordered-events.c | 82 +++++++++++++++++++++++++++-----
>  tools/perf/util/ordered-events.h | 37 +++++++-------
>  2 files changed, 90 insertions(+), 29 deletions(-)
> 
> diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
> index bad9e0296e9a..3672060508a7 100644
> --- a/tools/perf/util/ordered-events.c
> +++ b/tools/perf/util/ordered-events.c
> @@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
>  	return oe->copy_on_queue ? __dup_event(oe, event) : event;
>  }
>  
> -static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> +static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
>  {
> -	if (event && oe->copy_on_queue) {
> +	if (event) {
>  		oe->cur_alloc_size -= event->header.size;
>  		free(event);
>  	}
>  }
>  
> +static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> +{
> +	if (oe->copy_on_queue)
> +		__free_dup_event(oe, event);
> +}
> +
>  #define MAX_SAMPLE_BUFFER	(64 * 1024 / sizeof(struct ordered_event))
>  static struct ordered_event *alloc_event(struct ordered_events *oe,
>  					 union perf_event *event)
> @@ -100,15 +106,43 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
>  	if (!new_event)
>  		return NULL;
>  
> +	/*
> +	 * We maintain following scheme of buffers for ordered
> +	 * event allocation:
> +	 *
> +	 *   to_free list -> buffer1 (64K)
> +	 *                   buffer2 (64K)
> +	 *                   ...
> +	 *
> +	 * Each buffer keeps an array of ordered events objects:
> +	 *    buffer -> event[0]
> +	 *              event[1]
> +	 *              ...
> +	 *
> +	 * Each allocated ordered event is linked to one of
> +	 * following lists:
> +	 *   - time ordered list 'events'
> +	 *   - list of currently removed events 'cache'
> +	 *
> +	 * Allocation of the ordered event uses following order
> +	 * to get the memory:
> +	 *   - use recently removed object from 'cache' list
> +	 *   - use available object in current allocation buffer
> +	 *   - allocate new buffer if the current buffer is full
> +	 *
> +	 * Removal of ordered event object moves it from events to
> +	 * the cache list.
> +	 */
>  	if (!list_empty(cache)) {
>  		new = list_entry(cache->next, struct ordered_event, list);
>  		list_del(&new->list);
>  	} else if (oe->buffer) {
> -		new = oe->buffer + oe->buffer_idx;
> +		new = &oe->buffer->event[oe->buffer_idx];
>  		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
>  			oe->buffer = NULL;
>  	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
> -		size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
> +		size_t size = sizeof(*oe->buffer) +
> +			      MAX_SAMPLE_BUFFER * sizeof(*new);
>  
>  		oe->buffer = malloc(size);
>  		if (!oe->buffer) {
> @@ -122,9 +156,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
>  		oe->cur_alloc_size += size;
>  		list_add(&oe->buffer->list, &oe->to_free);
>  
> -		/* First entry is abused to maintain the to_free list. */
> -		oe->buffer_idx = 2;
> -		new = oe->buffer + 1;
> +		oe->buffer_idx = 1;
> +		new = &oe->buffer->event[0];
>  	} else {
>  		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
>  	}
> @@ -300,15 +333,38 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
>  	oe->deliver	   = deliver;
>  }
>  
> +static void
> +ordered_events_buffer__free(struct ordered_events_buffer *buffer,
> +			    unsigned int max, struct ordered_events *oe)
> +{
> +	if (oe->copy_on_queue) {
> +		unsigned int i;
> +
> +		for (i = 0; i < max; i++)
> +			__free_dup_event(oe, buffer->event[i].event);
> +	}
> +
> +	free(buffer);
> +}
> +
>  void ordered_events__free(struct ordered_events *oe)
>  {
> -	while (!list_empty(&oe->to_free)) {
> -		struct ordered_event *event;
> +	struct ordered_events_buffer *buffer, *tmp;
>  
> -		event = list_entry(oe->to_free.next, struct ordered_event, list);
> -		list_del(&event->list);
> -		free_dup_event(oe, event->event);
> -		free(event);
> +	if (list_empty(&oe->to_free))
> +		return;
> +
> +	/*
> +	 * Current buffer might not have all the events allocated
> +	 * yet, we need to free only allocated ones ...
> +	 */
> +	list_del(&oe->buffer->list);
> +	ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
> +
> +	/* ... and continue with the rest */
> +	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
> +		list_del(&buffer->list);
> +		ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
>  	}
>  }
>  
> diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
> index 8c7a2948593e..1338d5c345dc 100644
> --- a/tools/perf/util/ordered-events.h
> +++ b/tools/perf/util/ordered-events.h
> @@ -25,23 +25,28 @@ struct ordered_events;
>  typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
>  					 struct ordered_event *event);
>  
> +struct ordered_events_buffer {
> +	struct list_head	list;
> +	struct ordered_event	event[0];
> +};
> +
>  struct ordered_events {
> -	u64			last_flush;
> -	u64			next_flush;
> -	u64			max_timestamp;
> -	u64			max_alloc_size;
> -	u64			cur_alloc_size;
> -	struct list_head	events;
> -	struct list_head	cache;
> -	struct list_head	to_free;
> -	struct ordered_event	*buffer;
> -	struct ordered_event	*last;
> -	ordered_events__deliver_t deliver;
> -	int			buffer_idx;
> -	unsigned int		nr_events;
> -	enum oe_flush		last_flush_type;
> -	u32			nr_unordered_events;
> -	bool                    copy_on_queue;
> +	u64				 last_flush;
> +	u64				 next_flush;
> +	u64				 max_timestamp;
> +	u64				 max_alloc_size;
> +	u64				 cur_alloc_size;
> +	struct list_head		 events;
> +	struct list_head		 cache;
> +	struct list_head		 to_free;
> +	struct ordered_events_buffer	*buffer;
> +	struct ordered_event		*last;
> +	ordered_events__deliver_t	 deliver;
> +	int				 buffer_idx;
> +	unsigned int			 nr_events;
> +	enum oe_flush			 last_flush_type;
> +	u32				 nr_unordered_events;
> +	bool				 copy_on_queue;
>  };
>  
>  int ordered_events__queue(struct ordered_events *oe, union perf_event *event,
> -- 
> 2.17.1
> 


* Re: [PATCHv2] perf tools: Add struct ordered_events_buffer layer
  2018-08-27  9:28             ` [PATCHv2] " Jiri Olsa
  2018-08-27 11:07               ` Namhyung Kim
@ 2018-08-27 15:24               ` Stephane Eranian
  2018-08-27 17:05                 ` [PATCHv3] " Jiri Olsa
  1 sibling, 1 reply; 15+ messages in thread
From: Stephane Eranian @ 2018-08-27 15:24 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo, Namhyung Kim

Jiri,


On Mon, Aug 27, 2018 at 2:28 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Wed, Aug 15, 2018 at 10:48:25AM +0200, Jiri Olsa wrote:
> > On Tue, Aug 14, 2018 at 12:14:19AM -0700, Stephane Eranian wrote:
> >
> > SNIP
> >
> > > > @@ -104,11 +110,12 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
> > > >                 new = list_entry(cache->next, struct ordered_event, list);
> > > >                 list_del(&new->list);
> > > >         } else if (oe->buffer) {
> > > > -               new = oe->buffer + oe->buffer_idx;
> > > > +               new = &oe->buffer->event[oe->buffer_idx];
> > > >                 if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
> > > >                         oe->buffer = NULL;
> > > >         } else if (oe->cur_alloc_size < oe->max_alloc_size) {
> > > > -               size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
> > > > +               size_t size = sizeof(*oe->buffer) +
> > > > +                             MAX_SAMPLE_BUFFER * sizeof(*new);
> > > >
> > > >                 oe->buffer = malloc(size);
> > > >                 if (!oe->buffer) {
> > > > @@ -122,9 +129,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
> > > >                 oe->cur_alloc_size += size;
> > > >                 list_add(&oe->buffer->list, &oe->to_free);
> > > >
> > > > -               /* First entry is abused to maintain the to_free list. */
> > > > -               oe->buffer_idx = 2;
> > > > -               new = oe->buffer + 1;
> > > > +               oe->buffer_idx = 1;
> > > > +               new = &oe->buffer->event[0];
> > >
> > > Ok, but I think this section between the malloc() and the line above
> > > needs some comments to clarify what is going on.
> > > It is still hard to read.
> >
> > ok, I put some bigger comment at the top, but I'm not too happy
> > feel free to suggest different one ;-)
> >
> > >
> > > >         } else {
> > > >                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
> > > >         }
> > > > @@ -300,15 +306,27 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
> > > >         oe->deliver        = deliver;
> > > >  }
> > > >
> > > > +static void
> > > > +ordered_events_buffer__free(struct ordered_events_buffer *buffer,
> > > > +                           struct ordered_events *oe)
> > > > +{
> > > > +       if (oe->copy_on_queue) {
> > > > +               unsigned int i;
> > > > +
> > > > +               for (i = 0; i < MAX_SAMPLE_BUFFER; i++)
> > > > +                       __free_dup_event(oe, buffer->event[i].event);
> > > > +       }
> > > > +
> > > I have a problem with this one, given that the buffer->event[] is
> > > never actually zeroed.
> > > So what happens if you do not use all the entries by the time you have to free?
> > > I think one way to avoid this is by iterating only up to
> > > oe->buffer_idx.
> >
> > right, please check attached patch
>
> any comments? attaching v2
>
> thanks,
> jirka
>
>
> ---
> When ordering events, we use preallocated buffers to store separated
> events. Those buffers currently don't have their own struct, but since
> they are basically array of 'struct ordered_event' objects, we use the
> first event to hold buffers data - list head, that holds all buffers
> together:
>
>    struct ordered_events {
>      ...
>      struct ordered_event *buffer;
>      ...
>    };
>
>    struct ordered_event {
>      u64               timestamp;
>      u64               file_offset;
>      union perf_event  *event;
>      struct list_head  list;
>    };
>
> This is quite convoluted and error prone as demonstrated by
> free-ing issue discovered and fixed by Stephane in here [1].
>
> This patch adds the 'struct ordered_events_buffer' object,
> that holds the buffer data and frees it up properly.
>
> [1] - https://marc.info/?l=linux-kernel&m=153376761329335&w=2
>
> Reported-by: Stephane Eranian <eranian@google.com>
> Link: http://lkml.kernel.org/n/tip-qrkcqm5m1sugy4q83pfn5a1r@git.kernel.org
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/perf/util/ordered-events.c | 82 +++++++++++++++++++++++++++-----
>  tools/perf/util/ordered-events.h | 37 +++++++-------
>  2 files changed, 90 insertions(+), 29 deletions(-)
>
> diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
> index bad9e0296e9a..3672060508a7 100644
> --- a/tools/perf/util/ordered-events.c
> +++ b/tools/perf/util/ordered-events.c
> @@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
>         return oe->copy_on_queue ? __dup_event(oe, event) : event;
>  }
>
> -static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> +static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
>  {
> -       if (event && oe->copy_on_queue) {
> +       if (event) {
>                 oe->cur_alloc_size -= event->header.size;
>                 free(event);
>         }
>  }
>
> +static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> +{
> +       if (oe->copy_on_queue)
> +               __free_dup_event(oe, event);
> +}
> +
>  #define MAX_SAMPLE_BUFFER      (64 * 1024 / sizeof(struct ordered_event))
>  static struct ordered_event *alloc_event(struct ordered_events *oe,
>                                          union perf_event *event)
> @@ -100,15 +106,43 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
>         if (!new_event)
>                 return NULL;
>
> +       /*
> +        * We maintain following scheme of buffers for ordered
> +        * event allocation:
> +        *
> +        *   to_free list -> buffer1 (64K)
> +        *                   buffer2 (64K)
> +        *                   ...
> +        *
> +        * Each buffer keeps an array of ordered events objects:
> +        *    buffer -> event[0]
> +        *              event[1]
> +        *              ...
> +        *
> +        * Each allocated ordered event is linked to one of
> +        * following lists:
> +        *   - time ordered list 'events'
> +        *   - list of currently removed events 'cache'
> +        *
> +        * Allocation of the ordered event uses following order
> +        * to get the memory:
> +        *   - use recently removed object from 'cache' list
> +        *   - use available object in current allocation buffer
> +        *   - allocate new buffer if the current buffer is full
> +        *
> +        * Removal of ordered event object moves it from events to
> +        * the cache list.
> +        */
>         if (!list_empty(cache)) {
>                 new = list_entry(cache->next, struct ordered_event, list);
>                 list_del(&new->list);
>         } else if (oe->buffer) {
> -               new = oe->buffer + oe->buffer_idx;
> +               new = &oe->buffer->event[oe->buffer_idx];
>                 if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
>                         oe->buffer = NULL;
>         } else if (oe->cur_alloc_size < oe->max_alloc_size) {
> -               size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
> +               size_t size = sizeof(*oe->buffer) +
> +                             MAX_SAMPLE_BUFFER * sizeof(*new);
>
>                 oe->buffer = malloc(size);
>                 if (!oe->buffer) {
> @@ -122,9 +156,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
>                 oe->cur_alloc_size += size;
>                 list_add(&oe->buffer->list, &oe->to_free);
>
> -               /* First entry is abused to maintain the to_free list. */
> -               oe->buffer_idx = 2;
> -               new = oe->buffer + 1;
> +               oe->buffer_idx = 1;
> +               new = &oe->buffer->event[0];
>         } else {
>                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);


I am wondering about the usefulness of returning a new_event with
new_event->event = NULL
in this case. Don't you need new_event->event? If so, then you need to return NULL.

>         }
> @@ -300,15 +333,38 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
>         oe->deliver        = deliver;
>  }
>
> +static void
> +ordered_events_buffer__free(struct ordered_events_buffer *buffer,
> +                           unsigned int max, struct ordered_events *oe)
> +{
> +       if (oe->copy_on_queue) {
> +               unsigned int i;
> +
> +               for (i = 0; i < max; i++)
> +                       __free_dup_event(oe, buffer->event[i].event);
> +       }
> +
> +       free(buffer);
> +}
> +
>  void ordered_events__free(struct ordered_events *oe)
>  {
> -       while (!list_empty(&oe->to_free)) {
> -               struct ordered_event *event;
> +       struct ordered_events_buffer *buffer, *tmp;
>
> -               event = list_entry(oe->to_free.next, struct ordered_event, list);
> -               list_del(&event->list);
> -               free_dup_event(oe, event->event);
> -               free(event);
> +       if (list_empty(&oe->to_free))
> +               return;
> +
> +       /*
> +        * Current buffer might not have all the events allocated
> +        * yet, we need to free only allocated ones ...
> +        */
> +       list_del(&oe->buffer->list);
> +       ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
> +
> +       /* ... and continue with the rest */
> +       list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
> +               list_del(&buffer->list);
> +               ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);


Here you are saying that if it is on the to_free list and not the
current buffer, then necessarily
all the entries have been used and it is safe to use
MAX_SAMPLE_BUFFER. Is that right?

>         }
>  }
>
> diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
> index 8c7a2948593e..1338d5c345dc 100644
> --- a/tools/perf/util/ordered-events.h
> +++ b/tools/perf/util/ordered-events.h
> @@ -25,23 +25,28 @@ struct ordered_events;
>  typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
>                                          struct ordered_event *event);
>
> +struct ordered_events_buffer {
> +       struct list_head        list;
> +       struct ordered_event    event[0];
> +};
> +
>  struct ordered_events {
> -       u64                     last_flush;
> -       u64                     next_flush;
> -       u64                     max_timestamp;
> -       u64                     max_alloc_size;
> -       u64                     cur_alloc_size;
> -       struct list_head        events;
> -       struct list_head        cache;
> -       struct list_head        to_free;
> -       struct ordered_event    *buffer;
> -       struct ordered_event    *last;
> -       ordered_events__deliver_t deliver;
> -       int                     buffer_idx;
> -       unsigned int            nr_events;
> -       enum oe_flush           last_flush_type;
> -       u32                     nr_unordered_events;
> -       bool                    copy_on_queue;
> +       u64                              last_flush;
> +       u64                              next_flush;
> +       u64                              max_timestamp;
> +       u64                              max_alloc_size;
> +       u64                              cur_alloc_size;
> +       struct list_head                 events;
> +       struct list_head                 cache;
> +       struct list_head                 to_free;
> +       struct ordered_events_buffer    *buffer;
> +       struct ordered_event            *last;
> +       ordered_events__deliver_t        deliver;
> +       int                              buffer_idx;
> +       unsigned int                     nr_events;
> +       enum oe_flush                    last_flush_type;
> +       u32                              nr_unordered_events;
> +       bool                             copy_on_queue;
>  };
>
>  int ordered_events__queue(struct ordered_events *oe, union perf_event *event,
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCHv3] perf tools: Add struct ordered_events_buffer layer
  2018-08-27 15:24               ` Stephane Eranian
@ 2018-08-27 17:05                 ` " Jiri Olsa
  2018-09-02 14:47                   ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2018-08-27 17:05 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo, Namhyung Kim

On Mon, Aug 27, 2018 at 08:24:56AM -0700, Stephane Eranian wrote:

SNIP

> > -               /* First entry is abused to maintain the to_free list. */
> > -               oe->buffer_idx = 2;
> > -               new = oe->buffer + 1;
> > +               oe->buffer_idx = 1;
> > +               new = &oe->buffer->event[0];
> >         } else {
> >                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
> 
> 
> I am wondering about the usefulness of returning a new_event with
> new_event->event = NULL
> in this case. Don't you need new_event->event? If so, then you need to return NULL.

yep, that's a bug.. with new being NULL in here,
we'd get a crash anyway.. so 'return NULL;' it is

SNIP

> > +        * yet, we need to free only allocated ones ...
> > +        */
> > +       list_del(&oe->buffer->list);
> > +       ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
> > +
> > +       /* ... and continue with the rest */
> > +       list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
> > +               list_del(&buffer->list);
> > +               ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
> 
> 
> Here you are saying that if it is on the to_free list and not the
> current buffer, then necessarily
> all the entries have been used and it is safe to use
> MAX_SAMPLE_BUFFER. Is that right?

yes, at this point each entry either holds an event or NULL,
so it's safe to call __free_dup_event on it

thanks, v3 attached

added also Namhyung's ack, as the 'return NULL' change wasn't
related to the v2 changes

jirka


---
When ordering events, we use preallocated buffers to store separated
events. Those buffers currently don't have their own struct, but since
they are basically array of 'struct ordered_event' objects, we use the
first event to hold buffers data - list head, that holds all buffers
together:

   struct ordered_events {
     ...
     struct ordered_event *buffer;
     ...
   };

   struct ordered_event {
     u64               timestamp;
     u64               file_offset;
     union perf_event  *event;
     struct list_head  list;
   };

This is quite convoluted and error prone as demonstrated by
free-ing issue discovered and fixed by Stephane in here [1].

This patch adds the 'struct ordered_events_buffer' object,
that holds the buffer data and frees it up properly.

[1] - https://marc.info/?l=linux-kernel&m=153376761329335&w=2

Reported-by: Stephane Eranian <eranian@google.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-qrkcqm5m1sugy4q83pfn5a1r@git.kernel.org
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/perf/util/ordered-events.c | 83 +++++++++++++++++++++++++++-----
 tools/perf/util/ordered-events.h | 37 ++++++++------
 2 files changed, 91 insertions(+), 29 deletions(-)

diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
index bad9e0296e9a..87171e8fd70d 100644
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
 	return oe->copy_on_queue ? __dup_event(oe, event) : event;
 }
 
-static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
 {
-	if (event && oe->copy_on_queue) {
+	if (event) {
 		oe->cur_alloc_size -= event->header.size;
 		free(event);
 	}
 }
 
+static void free_dup_event(struct ordered_events *oe, union perf_event *event)
+{
+	if (oe->copy_on_queue)
+		__free_dup_event(oe, event);
+}
+
 #define MAX_SAMPLE_BUFFER	(64 * 1024 / sizeof(struct ordered_event))
 static struct ordered_event *alloc_event(struct ordered_events *oe,
 					 union perf_event *event)
@@ -100,15 +106,43 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 	if (!new_event)
 		return NULL;
 
+	/*
+	 * We maintain following scheme of buffers for ordered
+	 * event allocation:
+	 *
+	 *   to_free list -> buffer1 (64K)
+	 *                   buffer2 (64K)
+	 *                   ...
+	 *
+	 * Each buffer keeps an array of ordered events objects:
+	 *    buffer -> event[0]
+	 *              event[1]
+	 *              ...
+	 *
+	 * Each allocated ordered event is linked to one of
+	 * following lists:
+	 *   - time ordered list 'events'
+	 *   - list of currently removed events 'cache'
+	 *
+	 * Allocation of the ordered event uses following order
+	 * to get the memory:
+	 *   - use recently removed object from 'cache' list
+	 *   - use available object in current allocation buffer
+	 *   - allocate new buffer if the current buffer is full
+	 *
+	 * Removal of ordered event object moves it from events to
+	 * the cache list.
+	 */
 	if (!list_empty(cache)) {
 		new = list_entry(cache->next, struct ordered_event, list);
 		list_del(&new->list);
 	} else if (oe->buffer) {
-		new = oe->buffer + oe->buffer_idx;
+		new = &oe->buffer->event[oe->buffer_idx];
 		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
 			oe->buffer = NULL;
 	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
-		size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
+		size_t size = sizeof(*oe->buffer) +
+			      MAX_SAMPLE_BUFFER * sizeof(*new);
 
 		oe->buffer = malloc(size);
 		if (!oe->buffer) {
@@ -122,11 +156,11 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		oe->cur_alloc_size += size;
 		list_add(&oe->buffer->list, &oe->to_free);
 
-		/* First entry is abused to maintain the to_free list. */
-		oe->buffer_idx = 2;
-		new = oe->buffer + 1;
+		oe->buffer_idx = 1;
+		new = &oe->buffer->event[0];
 	} else {
 		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
+		return NULL;
 	}
 
 	new->event = new_event;
@@ -300,15 +334,38 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
 	oe->deliver	   = deliver;
 }
 
+static void
+ordered_events_buffer__free(struct ordered_events_buffer *buffer,
+			    unsigned int max, struct ordered_events *oe)
+{
+	if (oe->copy_on_queue) {
+		unsigned int i;
+
+		for (i = 0; i < max; i++)
+			__free_dup_event(oe, buffer->event[i].event);
+	}
+
+	free(buffer);
+}
+
 void ordered_events__free(struct ordered_events *oe)
 {
-	while (!list_empty(&oe->to_free)) {
-		struct ordered_event *event;
+	struct ordered_events_buffer *buffer, *tmp;
 
-		event = list_entry(oe->to_free.next, struct ordered_event, list);
-		list_del(&event->list);
-		free_dup_event(oe, event->event);
-		free(event);
+	if (list_empty(&oe->to_free))
+		return;
+
+	/*
+	 * Current buffer might not have all the events allocated
+	 * yet, we need to free only allocated ones ...
+	 */
+	list_del(&oe->buffer->list);
+	ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
+
+	/* ... and continue with the rest */
+	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
+		list_del(&buffer->list);
+		ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
 	}
 }
 
diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
index 8c7a2948593e..1338d5c345dc 100644
--- a/tools/perf/util/ordered-events.h
+++ b/tools/perf/util/ordered-events.h
@@ -25,23 +25,28 @@ struct ordered_events;
 typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
 					 struct ordered_event *event);
 
+struct ordered_events_buffer {
+	struct list_head	list;
+	struct ordered_event	event[0];
+};
+
 struct ordered_events {
-	u64			last_flush;
-	u64			next_flush;
-	u64			max_timestamp;
-	u64			max_alloc_size;
-	u64			cur_alloc_size;
-	struct list_head	events;
-	struct list_head	cache;
-	struct list_head	to_free;
-	struct ordered_event	*buffer;
-	struct ordered_event	*last;
-	ordered_events__deliver_t deliver;
-	int			buffer_idx;
-	unsigned int		nr_events;
-	enum oe_flush		last_flush_type;
-	u32			nr_unordered_events;
-	bool                    copy_on_queue;
+	u64				 last_flush;
+	u64				 next_flush;
+	u64				 max_timestamp;
+	u64				 max_alloc_size;
+	u64				 cur_alloc_size;
+	struct list_head		 events;
+	struct list_head		 cache;
+	struct list_head		 to_free;
+	struct ordered_events_buffer	*buffer;
+	struct ordered_event		*last;
+	ordered_events__deliver_t	 deliver;
+	int				 buffer_idx;
+	unsigned int			 nr_events;
+	enum oe_flush			 last_flush_type;
+	u32				 nr_unordered_events;
+	bool				 copy_on_queue;
 };
 
 int ordered_events__queue(struct ordered_events *oe, union perf_event *event,
-- 
2.17.1


* Re: [PATCHv3] perf tools: Add struct ordered_events_buffer layer
  2018-08-27 17:05                 ` [PATCHv3] " Jiri Olsa
@ 2018-09-02 14:47                   ` Jiri Olsa
  2018-09-04  2:37                     ` Stephane Eranian
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2018-09-02 14:47 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo, Namhyung Kim

On Mon, Aug 27, 2018 at 07:05:43PM +0200, Jiri Olsa wrote:
> On Mon, Aug 27, 2018 at 08:24:56AM -0700, Stephane Eranian wrote:
> 
> SNIP
> 
> > > -               /* First entry is abused to maintain the to_free list. */
> > > -               oe->buffer_idx = 2;
> > > -               new = oe->buffer + 1;
> > > +               oe->buffer_idx = 1;
> > > +               new = &oe->buffer->event[0];
> > >         } else {
> > >                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
> > 
> > 
> > I am wondering about the usefulness of returning a new_event with
> > new_event->event = NULL
> > in this case. Don't you need new_event->event? If so, then you need to return NULL.
> 
> yep, that's a bug.. with new being NULL in here,
> we'd get a crash anyway.. so 'return NULL;' it is
> 
> SNIP
> 
> > > +        * yet, we need to free only allocated ones ...
> > > +        */
> > > +       list_del(&oe->buffer->list);
> > > +       ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
> > > +
> > > +       /* ... and continue with the rest */
> > > +       list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
> > > +               list_del(&buffer->list);
> > > +               ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
> > 
> > 
> > Here you are saying that if it is on the to_free list and not the
> > current buffer, then necessarily
> > all the entries have been used and it is safe to use
> > MAX_SAMPLE_BUFFER. Is that right?
> 
> yes, at this point each entry either holds an event or NULL,
> so it's safe to call __free_dup_event on it
> 
> thanks, v3 attached
> 
> added also Namhyung's ack, as the 'return NULL' change wasn't
> related to the v2 changes
> 
> jirka

Stephane,
any comments to v3 version?

thanks,
jirka

> 
> 
> ---
> When ordering events, we use preallocated buffers to store separated
> events. Those buffers currently don't have their own struct, but since
> they are basically array of 'struct ordered_event' objects, we use the
> first event to hold buffers data - list head, that holds all buffers
> together:
> 
>    struct ordered_events {
>      ...
>      struct ordered_event *buffer;
>      ...
>    };
> 
>    struct ordered_event {
>      u64               timestamp;
>      u64               file_offset;
>      union perf_event  *event;
>      struct list_head  list;
>    };
> 
> This is quite convoluted and error prone as demonstrated by
> free-ing issue discovered and fixed by Stephane in here [1].
> 
> This patch adds the 'struct ordered_events_buffer' object,
> that holds the buffer data and frees it up properly.
> 
> [1] - https://marc.info/?l=linux-kernel&m=153376761329335&w=2
> 
> Reported-by: Stephane Eranian <eranian@google.com>
> Acked-by: Namhyung Kim <namhyung@kernel.org>
> Link: http://lkml.kernel.org/n/tip-qrkcqm5m1sugy4q83pfn5a1r@git.kernel.org
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/perf/util/ordered-events.c | 83 +++++++++++++++++++++++++++-----
>  tools/perf/util/ordered-events.h | 37 ++++++++------
>  2 files changed, 91 insertions(+), 29 deletions(-)
> 
> diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
> index bad9e0296e9a..87171e8fd70d 100644
> --- a/tools/perf/util/ordered-events.c
> +++ b/tools/perf/util/ordered-events.c
> @@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
>  	return oe->copy_on_queue ? __dup_event(oe, event) : event;
>  }
>  
> -static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> +static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
>  {
> -	if (event && oe->copy_on_queue) {
> +	if (event) {
>  		oe->cur_alloc_size -= event->header.size;
>  		free(event);
>  	}
>  }
>  
> +static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> +{
> +	if (oe->copy_on_queue)
> +		__free_dup_event(oe, event);
> +}
> +
>  #define MAX_SAMPLE_BUFFER	(64 * 1024 / sizeof(struct ordered_event))
>  static struct ordered_event *alloc_event(struct ordered_events *oe,
>  					 union perf_event *event)
> @@ -100,15 +106,43 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
>  	if (!new_event)
>  		return NULL;
>  
> +	/*
> +	 * We maintain following scheme of buffers for ordered
> +	 * event allocation:
> +	 *
> +	 *   to_free list -> buffer1 (64K)
> +	 *                   buffer2 (64K)
> +	 *                   ...
> +	 *
> +	 * Each buffer keeps an array of ordered events objects:
> +	 *    buffer -> event[0]
> +	 *              event[1]
> +	 *              ...
> +	 *
> +	 * Each allocated ordered event is linked to one of
> +	 * following lists:
> +	 *   - time ordered list 'events'
> +	 *   - list of currently removed events 'cache'
> +	 *
> +	 * Allocation of the ordered event uses following order
> +	 * to get the memory:
> +	 *   - use recently removed object from 'cache' list
> +	 *   - use available object in current allocation buffer
> +	 *   - allocate new buffer if the current buffer is full
> +	 *
> +	 * Removal of ordered event object moves it from events to
> +	 * the cache list.
> +	 */
>  	if (!list_empty(cache)) {
>  		new = list_entry(cache->next, struct ordered_event, list);
>  		list_del(&new->list);
>  	} else if (oe->buffer) {
> -		new = oe->buffer + oe->buffer_idx;
> +		new = &oe->buffer->event[oe->buffer_idx];
>  		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
>  			oe->buffer = NULL;
>  	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
> -		size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
> +		size_t size = sizeof(*oe->buffer) +
> +			      MAX_SAMPLE_BUFFER * sizeof(*new);
>  
>  		oe->buffer = malloc(size);
>  		if (!oe->buffer) {
> @@ -122,11 +156,11 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
>  		oe->cur_alloc_size += size;
>  		list_add(&oe->buffer->list, &oe->to_free);
>  
> -		/* First entry is abused to maintain the to_free list. */
> -		oe->buffer_idx = 2;
> -		new = oe->buffer + 1;
> +		oe->buffer_idx = 1;
> +		new = &oe->buffer->event[0];
>  	} else {
>  		pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
> +		return NULL;
>  	}
>  
>  	new->event = new_event;
> @@ -300,15 +334,38 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
>  	oe->deliver	   = deliver;
>  }
>  
> +static void
> +ordered_events_buffer__free(struct ordered_events_buffer *buffer,
> +			    unsigned int max, struct ordered_events *oe)
> +{
> +	if (oe->copy_on_queue) {
> +		unsigned int i;
> +
> +		for (i = 0; i < max; i++)
> +			__free_dup_event(oe, buffer->event[i].event);
> +	}
> +
> +	free(buffer);
> +}
> +
>  void ordered_events__free(struct ordered_events *oe)
>  {
> -	while (!list_empty(&oe->to_free)) {
> -		struct ordered_event *event;
> +	struct ordered_events_buffer *buffer, *tmp;
>  
> -		event = list_entry(oe->to_free.next, struct ordered_event, list);
> -		list_del(&event->list);
> -		free_dup_event(oe, event->event);
> -		free(event);
> +	if (list_empty(&oe->to_free))
> +		return;
> +
> +	/*
> +	 * Current buffer might not have all the events allocated
> +	 * yet, we need to free only allocated ones ...
> +	 */
> +	list_del(&oe->buffer->list);
> +	ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
> +
> +	/* ... and continue with the rest */
> +	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
> +		list_del(&buffer->list);
> +		ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
>  	}
>  }
>  
> diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
> index 8c7a2948593e..1338d5c345dc 100644
> --- a/tools/perf/util/ordered-events.h
> +++ b/tools/perf/util/ordered-events.h
> @@ -25,23 +25,28 @@ struct ordered_events;
>  typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
>  					 struct ordered_event *event);
>  
> +struct ordered_events_buffer {
> +	struct list_head	list;
> +	struct ordered_event	event[0];
> +};
> +
>  struct ordered_events {
> -	u64			last_flush;
> -	u64			next_flush;
> -	u64			max_timestamp;
> -	u64			max_alloc_size;
> -	u64			cur_alloc_size;
> -	struct list_head	events;
> -	struct list_head	cache;
> -	struct list_head	to_free;
> -	struct ordered_event	*buffer;
> -	struct ordered_event	*last;
> -	ordered_events__deliver_t deliver;
> -	int			buffer_idx;
> -	unsigned int		nr_events;
> -	enum oe_flush		last_flush_type;
> -	u32			nr_unordered_events;
> -	bool                    copy_on_queue;
> +	u64				 last_flush;
> +	u64				 next_flush;
> +	u64				 max_timestamp;
> +	u64				 max_alloc_size;
> +	u64				 cur_alloc_size;
> +	struct list_head		 events;
> +	struct list_head		 cache;
> +	struct list_head		 to_free;
> +	struct ordered_events_buffer	*buffer;
> +	struct ordered_event		*last;
> +	ordered_events__deliver_t	 deliver;
> +	int				 buffer_idx;
> +	unsigned int			 nr_events;
> +	enum oe_flush			 last_flush_type;
> +	u32				 nr_unordered_events;
> +	bool				 copy_on_queue;
>  };
>  
>  int ordered_events__queue(struct ordered_events *oe, union perf_event *event,
> -- 
> 2.17.1


* Re: [PATCHv3] perf tools: Add struct ordered_events_buffer layer
  2018-09-02 14:47                   ` Jiri Olsa
@ 2018-09-04  2:37                     ` Stephane Eranian
  2018-09-06 13:28                       ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Stephane Eranian @ 2018-09-04  2:37 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo, Namhyung Kim

Jiri,

On Sun, Sep 2, 2018 at 7:47 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Mon, Aug 27, 2018 at 07:05:43PM +0200, Jiri Olsa wrote:
> > On Mon, Aug 27, 2018 at 08:24:56AM -0700, Stephane Eranian wrote:
> >
> > SNIP
> >
> > > > -               /* First entry is abused to maintain the to_free list. */
> > > > -               oe->buffer_idx = 2;
> > > > -               new = oe->buffer + 1;
> > > > +               oe->buffer_idx = 1;
> > > > +               new = &oe->buffer->event[0];
> > > >         } else {
> > > >                 pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
> > >
> > >
> > > I am wondering about the usefulness of returning a new_event with
> > > new_event->event = NULL
> > > in this case. Don't you need new_event->event? If so, then you need to return NULL.
> >
> > yep, that's a bug.. with new being NULL in here,
> > we'd get a crash anyway.. so 'return NULL;' it is
> >
> > SNIP
> >
> > > > +        * yet, we need to free only allocated ones ...
> > > > +        */
> > > > +       list_del(&oe->buffer->list);
> > > > +       ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
> > > > +
> > > > +       /* ... and continue with the rest */
> > > > +       list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
> > > > +               list_del(&buffer->list);
> > > > +               ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
> > >
> > >
> > > Here you are saying that if it is on the to_free list and not the
> > > current buffer, then necessarily
> > > all the entries have been used and it is safe to use
> > > MAX_SAMPLE_BUFFER. Is that right?
> >
> > yes, at this point each entry either holds an event or NULL,
> > so it's safe to call __free_dup_event on it
> >
> > thanks, v3 attached
> >
> > added also Namhyung's ack, as the 'return NULL' change wasn't
> > related to the v2 changes
> >

I think the code is correct now for the uninitialized pointer issue.
But there is still one problem I found while stressing the code with a
low max_alloc_size. The way the following is written:

       if (!list_empty(cache)) {
                new = list_entry(cache->next, struct ordered_event, list);
                list_del(&new->list);
        } else if (oe->buffer) {
                new = oe->buffer + oe->buffer_idx;
                if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
                        oe->buffer = NULL;
        } else if (oe->cur_alloc_size < oe->max_alloc_size) {
                size_t size = sizeof(*oe->buffer) +
                              MAX_SAMPLE_BUFFER * sizeof(*new);

                oe->buffer = malloc(size);
                if (!oe->buffer) {
                        free_dup_event(oe, new_event);
                        return NULL;
                }

                pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n",
                   oe->cur_alloc_size, size, oe->max_alloc_size);

                oe->cur_alloc_size += size;

You can end up with oe->cur_alloc_size > oe->max_alloc_size in case
the max limit is really low (smaller than size = sizeof(*oe->buffer) +
MAX_SAMPLE_BUFFER * sizeof(*new)). So I think to make sure you can
never allocate more than the max, you have to do:

      size_t size = sizeof(*oe->buffer) + MAX_SAMPLE_BUFFER * sizeof(*new);
       if (!list_empty(cache)) {
                new = list_entry(cache->next, struct ordered_event, list);
                list_del(&new->list);
        } else if (oe->buffer) {
                new = oe->buffer + oe->buffer_idx;
                if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
                        oe->buffer = NULL;
        } else if ((oe->cur_alloc_size + size) < oe->max_alloc_size) {

Then you will never allocate more than the max.
I think with this change, we are okay.
Tested-by: Stephane Eranian <eranian@google.com>

> jirka
>
> Stephane,
> any comments to v3 version?
>
> thanks,
> jirka
>
> >
> >
> > ---
> > When ordering events, we use preallocated buffers to store separate
> > events. Those buffers currently don't have their own struct, but since
> > they are basically arrays of 'struct ordered_event' objects, we use the
> > first event to hold the buffer's data - the list head that links all
> > buffers together:
> >
> >    struct ordered_events {
> >      ...
> >      struct ordered_event *buffer;
> >      ...
> >    };
> >
> >    struct ordered_event {
> >      u64               timestamp;
> >      u64               file_offset;
> >      union perf_event  *event;
> >      struct list_head  list;
> >    };
> >
> > This is quite convoluted and error prone, as demonstrated by the
> > freeing issue discovered and fixed by Stephane in [1].
> >
> > This patch adds the 'struct ordered_events_buffer' object,
> > that holds the buffer data and frees it up properly.
> >
> > [1] - https://marc.info/?l=linux-kernel&m=153376761329335&w=2
> >
> > Reported-by: Stephane Eranian <eranian@google.com>
> > Acked-by: Namhyung Kim <namhyung@kernel.org>
> > Link: http://lkml.kernel.org/n/tip-qrkcqm5m1sugy4q83pfn5a1r@git.kernel.org
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  tools/perf/util/ordered-events.c | 83 +++++++++++++++++++++++++++-----
> >  tools/perf/util/ordered-events.h | 37 ++++++++------
> >  2 files changed, 91 insertions(+), 29 deletions(-)
> >
> > diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
> > index bad9e0296e9a..87171e8fd70d 100644
> > --- a/tools/perf/util/ordered-events.c
> > +++ b/tools/perf/util/ordered-events.c
> > @@ -80,14 +80,20 @@ static union perf_event *dup_event(struct ordered_events *oe,
> >       return oe->copy_on_queue ? __dup_event(oe, event) : event;
> >  }
> >
> > -static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> > +static void __free_dup_event(struct ordered_events *oe, union perf_event *event)
> >  {
> > -     if (event && oe->copy_on_queue) {
> > +     if (event) {
> >               oe->cur_alloc_size -= event->header.size;
> >               free(event);
> >       }
> >  }
> >
> > +static void free_dup_event(struct ordered_events *oe, union perf_event *event)
> > +{
> > +     if (oe->copy_on_queue)
> > +             __free_dup_event(oe, event);
> > +}
> > +
> >  #define MAX_SAMPLE_BUFFER    (64 * 1024 / sizeof(struct ordered_event))
> >  static struct ordered_event *alloc_event(struct ordered_events *oe,
> >                                        union perf_event *event)
> > @@ -100,15 +106,43 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
> >       if (!new_event)
> >               return NULL;
> >
> > +     /*
> > +      * We maintain following scheme of buffers for ordered
> > +      * event allocation:
> > +      *
> > +      *   to_free list -> buffer1 (64K)
> > +      *                   buffer2 (64K)
> > +      *                   ...
> > +      *
> > +      * Each buffer keeps an array of ordered events objects:
> > +      *    buffer -> event[0]
> > +      *              event[1]
> > +      *              ...
> > +      *
> > +      * Each allocated ordered event is linked to one of
> > +      * following lists:
> > +      *   - time ordered list 'events'
> > +      *   - list of currently removed events 'cache'
> > +      *
> > +      * Allocation of the ordered event uses following order
> > +      * to get the memory:
> > +      *   - use recently removed object from 'cache' list
> > +      *   - use available object in current allocation buffer
> > +      *   - allocate new buffer if the current buffer is full
> > +      *
> > +      * Removal of ordered event object moves it from events to
> > +      * the cache list.
> > +      */
> >       if (!list_empty(cache)) {
> >               new = list_entry(cache->next, struct ordered_event, list);
> >               list_del(&new->list);
> >       } else if (oe->buffer) {
> > -             new = oe->buffer + oe->buffer_idx;
> > +             new = &oe->buffer->event[oe->buffer_idx];
> >               if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
> >                       oe->buffer = NULL;
> >       } else if (oe->cur_alloc_size < oe->max_alloc_size) {
> > -             size_t size = MAX_SAMPLE_BUFFER * sizeof(*new);
> > +             size_t size = sizeof(*oe->buffer) +
> > +                           MAX_SAMPLE_BUFFER * sizeof(*new);
> >
> >               oe->buffer = malloc(size);
> >               if (!oe->buffer) {
> > @@ -122,11 +156,11 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
> >               oe->cur_alloc_size += size;
> >               list_add(&oe->buffer->list, &oe->to_free);
> >
> > -             /* First entry is abused to maintain the to_free list. */
> > -             oe->buffer_idx = 2;
> > -             new = oe->buffer + 1;
> > +             oe->buffer_idx = 1;
> > +             new = &oe->buffer->event[0];
> >       } else {
> >               pr("allocation limit reached %" PRIu64 "B\n", oe->max_alloc_size);
> > +             return NULL;
> >       }
> >
> >       new->event = new_event;
> > @@ -300,15 +334,38 @@ void ordered_events__init(struct ordered_events *oe, ordered_events__deliver_t d
> >       oe->deliver        = deliver;
> >  }
> >
> > +static void
> > +ordered_events_buffer__free(struct ordered_events_buffer *buffer,
> > +                         unsigned int max, struct ordered_events *oe)
> > +{
> > +     if (oe->copy_on_queue) {
> > +             unsigned int i;
> > +
> > +             for (i = 0; i < max; i++)
> > +                     __free_dup_event(oe, buffer->event[i].event);
> > +     }
> > +
> > +     free(buffer);
> > +}
> > +
> >  void ordered_events__free(struct ordered_events *oe)
> >  {
> > -     while (!list_empty(&oe->to_free)) {
> > -             struct ordered_event *event;
> > +     struct ordered_events_buffer *buffer, *tmp;
> >
> > -             event = list_entry(oe->to_free.next, struct ordered_event, list);
> > -             list_del(&event->list);
> > -             free_dup_event(oe, event->event);
> > -             free(event);
> > +     if (list_empty(&oe->to_free))
> > +             return;
> > +
> > +     /*
> > +      * Current buffer might not have all the events allocated
> > +      * yet, we need to free only allocated ones ...
> > +      */
> > +     list_del(&oe->buffer->list);
> > +     ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
> > +
> > +     /* ... and continue with the rest */
> > +     list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
> > +             list_del(&buffer->list);
> > +             ordered_events_buffer__free(buffer, MAX_SAMPLE_BUFFER, oe);
> >       }
> >  }
> >
> > diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
> > index 8c7a2948593e..1338d5c345dc 100644
> > --- a/tools/perf/util/ordered-events.h
> > +++ b/tools/perf/util/ordered-events.h
> > @@ -25,23 +25,28 @@ struct ordered_events;
> >  typedef int (*ordered_events__deliver_t)(struct ordered_events *oe,
> >                                        struct ordered_event *event);
> >
> > +struct ordered_events_buffer {
> > +     struct list_head        list;
> > +     struct ordered_event    event[0];
> > +};
> > +
> >  struct ordered_events {
> > -     u64                     last_flush;
> > -     u64                     next_flush;
> > -     u64                     max_timestamp;
> > -     u64                     max_alloc_size;
> > -     u64                     cur_alloc_size;
> > -     struct list_head        events;
> > -     struct list_head        cache;
> > -     struct list_head        to_free;
> > -     struct ordered_event    *buffer;
> > -     struct ordered_event    *last;
> > -     ordered_events__deliver_t deliver;
> > -     int                     buffer_idx;
> > -     unsigned int            nr_events;
> > -     enum oe_flush           last_flush_type;
> > -     u32                     nr_unordered_events;
> > -     bool                    copy_on_queue;
> > +     u64                              last_flush;
> > +     u64                              next_flush;
> > +     u64                              max_timestamp;
> > +     u64                              max_alloc_size;
> > +     u64                              cur_alloc_size;
> > +     struct list_head                 events;
> > +     struct list_head                 cache;
> > +     struct list_head                 to_free;
> > +     struct ordered_events_buffer    *buffer;
> > +     struct ordered_event            *last;
> > +     ordered_events__deliver_t        deliver;
> > +     int                              buffer_idx;
> > +     unsigned int                     nr_events;
> > +     enum oe_flush                    last_flush_type;
> > +     u32                              nr_unordered_events;
> > +     bool                             copy_on_queue;
> >  };
> >
> >  int ordered_events__queue(struct ordered_events *oe, union perf_event *event,
> > --
> > 2.17.1


* Re: [PATCHv3] perf tools: Add struct ordered_events_buffer layer
  2018-09-04  2:37                     ` Stephane Eranian
@ 2018-09-06 13:28                       ` Jiri Olsa
  2018-09-06 15:04                         ` Stephane Eranian
  0 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2018-09-06 13:28 UTC (permalink / raw)
  To: Stephane Eranian
  Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo, Namhyung Kim

On Mon, Sep 03, 2018 at 07:37:56PM -0700, Stephane Eranian wrote:

SNIP

> 
> I think the code is correct now for the issue related to uninitialized pointer.
> But there is still one problem I found stressing the code with max_alloc_size.
> The way the following is written:
> 
>        if (!list_empty(cache)) {
>                 new = list_entry(cache->next, struct ordered_event, list);
>                 list_del(&new->list);
>         } else if (oe->buffer) {
>                 new = oe->buffer + oe->buffer_idx;
>                 if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
>                         oe->buffer = NULL;
>         } else if (oe->cur_alloc_size < oe->max_alloc_size) {
>                 size_t size = sizeof(*oe->buffer) + MAX_SAMPLE_BUFFER * sizeof(*new);
> 
>                 oe->buffer = malloc(size);
>                 if (!oe->buffer) {
>                         free_dup_event(oe, new_event);
>                         return NULL;
>                 }
> 
>                 pr("alloc size %" PRIu64 "B (+%zu), max %" PRIu64 "B\n",
>                    oe->cur_alloc_size, size, oe->max_alloc_size);
> 
>                 oe->cur_alloc_size += size;
> 
> You can end up with oe->cur_alloc_size > oe->max_alloc_size in case the
> max limit is really low (lower than sizeof(*oe->buffer) +
> MAX_SAMPLE_BUFFER * sizeof(*new)).
> So I think, to make sure you can never allocate more than the max, you
> have to do:
> 
>       size_t size = sizeof(*oe->buffer) + MAX_SAMPLE_BUFFER * sizeof(*new);
>        if (!list_empty(cache)) {
>                 new = list_entry(cache->next, struct ordered_event, list);
>                 list_del(&new->list);
>         } else if (oe->buffer) {
>                 new = oe->buffer + oe->buffer_idx;
>                 if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
>                         oe->buffer = NULL;
>         } else if ((oe->cur_alloc_size + size) < oe->max_alloc_size) {
> 
> Then you will never allocate more than the max.
> I think with this change, we are okay.
> Tested-by: Stephane Eranian <eranian@google.com>

yep, makes sense.. something like below then
I'll post it on top of the previous patch

thanks,
jirka


---
diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
index 87171e8fd70d..2d1d0f3c8f77 100644
--- a/tools/perf/util/ordered-events.c
+++ b/tools/perf/util/ordered-events.c
@@ -101,6 +101,7 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 	struct list_head *cache = &oe->cache;
 	struct ordered_event *new = NULL;
 	union perf_event *new_event;
+	size_t size;
 
 	new_event = dup_event(oe, event);
 	if (!new_event)
@@ -133,6 +134,8 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 	 * Removal of ordered event object moves it from events to
 	 * the cache list.
 	 */
+	size = sizeof(*oe->buffer) + MAX_SAMPLE_BUFFER * sizeof(*new);
+
 	if (!list_empty(cache)) {
 		new = list_entry(cache->next, struct ordered_event, list);
 		list_del(&new->list);
@@ -140,10 +143,7 @@ static struct ordered_event *alloc_event(struct ordered_events *oe,
 		new = &oe->buffer->event[oe->buffer_idx];
 		if (++oe->buffer_idx == MAX_SAMPLE_BUFFER)
 			oe->buffer = NULL;
-	} else if (oe->cur_alloc_size < oe->max_alloc_size) {
-		size_t size = sizeof(*oe->buffer) +
-			      MAX_SAMPLE_BUFFER * sizeof(*new);
-
+	} else if ((oe->cur_alloc_size + size) < oe->max_alloc_size) {
 		oe->buffer = malloc(size);
 		if (!oe->buffer) {
 			free_dup_event(oe, new_event);


* Re: [PATCHv3] perf tools: Add struct ordered_events_buffer layer
  2018-09-06 13:28                       ` Jiri Olsa
@ 2018-09-06 15:04                         ` Stephane Eranian
  0 siblings, 0 replies; 15+ messages in thread
From: Stephane Eranian @ 2018-09-06 15:04 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: LKML, Arnaldo Carvalho de Melo, Peter Zijlstra, mingo, Namhyung Kim

On Thu, Sep 6, 2018 at 6:29 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Mon, Sep 03, 2018 at 07:37:56PM -0700, Stephane Eranian wrote:
>
> SNIP
>
> > SNIP
>
> yep, makes sense.. something like below then
> I'll post it on top of the previous patch
>
Yes. This works.
Thanks.


> thanks,
> jirka



Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-08-08 22:33 [PATCH v2] perf ordered_events: fix crash in free_dup_event() Stephane Eranian
2018-08-09  8:07 ` Jiri Olsa
2018-08-10  8:21   ` Stephane Eranian
2018-08-10 11:54     ` Jiri Olsa
2018-08-13 13:04       ` [PATCH] perf tools: Add struct ordered_events_buffer layer Jiri Olsa
2018-08-14  7:14         ` Stephane Eranian
2018-08-15  8:48           ` Jiri Olsa
2018-08-27  9:28             ` [PATCHv2] " Jiri Olsa
2018-08-27 11:07               ` Namhyung Kim
2018-08-27 15:24               ` Stephane Eranian
2018-08-27 17:05                 ` [PATCHv3] " Jiri Olsa
2018-09-02 14:47                   ` Jiri Olsa
2018-09-04  2:37                     ` Stephane Eranian
2018-09-06 13:28                       ` Jiri Olsa
2018-09-06 15:04                         ` Stephane Eranian
