linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] perf hists: Changes on hists__{match,link} (v3)
@ 2012-12-06 15:09 Namhyung Kim
  2012-12-06 15:09 ` [PATCH 1/5] perf diff: Removing displacement output option Namhyung Kim
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Namhyung Kim @ 2012-12-06 15:09 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo; +Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML

Hi,

I rebased my series on Jiri's diff displacement removal patch.  As the
displacement logic is gone, there is no need to resort the baseline (by
period) since the output is aligned only to the newer data (i.e. the
"other" hists).

I also pushed the branch onto my tree at:

git://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git perf/link-v3

Please take a look; any comments are welcome.

Thanks,
Namhyung


Jiri Olsa (1):
  perf diff: Removing displacement output option

Namhyung Kim (4):
  perf hists: Exchange order of comparing items when collapsing hists
  perf hists: Link hist entries before inserting to an output tree
  perf diff: Use internal rb tree for compute resort
  perf test: Add a test case for hists__{match,link}

 tools/perf/Documentation/perf-diff.txt |    4 -
 tools/perf/Makefile                    |    1 +
 tools/perf/builtin-diff.c              |  104 ++-----
 tools/perf/tests/builtin-test.c        |    4 +
 tools/perf/tests/hists_link.c          |  502 ++++++++++++++++++++++++++++++++
 tools/perf/tests/tests.h               |    1 +
 tools/perf/ui/hist.c                   |   25 --
 tools/perf/util/hist.c                 |   51 +++-
 tools/perf/util/hist.h                 |    1 -
 tools/perf/util/machine.h              |    1 +
 tools/perf/util/session.c              |    2 +-
 11 files changed, 580 insertions(+), 116 deletions(-)
 create mode 100644 tools/perf/tests/hists_link.c

-- 
1.7.9.2


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/5] perf diff: Removing displacement output option
  2012-12-06 15:09 [PATCH 0/5] perf hists: Changes on hists__{match,link} (v3) Namhyung Kim
@ 2012-12-06 15:09 ` Namhyung Kim
  2012-12-06 15:09 ` [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists Namhyung Kim
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Namhyung Kim @ 2012-12-06 15:09 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML,
	Arnaldo Carvalho de Melo, Corey Ashford, Frederic Weisbecker,
	Ingo Molnar, Paul Mackerras, Peter Zijlstra

From: Jiri Olsa <jolsa@redhat.com>

Removing the displacement output option.  It seems not very useful,
because it's possible and even more convenient to look up the related
symbol by name.  Also the output values for both 'baseline' and 'new'
data are quite apparent from the diff output.

And above all, it complicates hist code refactoring ;)

This ditches the PERF_HPP__DISPL column along with its related output
functions.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 tools/perf/Documentation/perf-diff.txt |    4 ----
 tools/perf/builtin-diff.c              |   29 +++++++----------------------
 tools/perf/ui/hist.c                   |   25 -------------------------
 tools/perf/util/hist.h                 |    1 -
 4 files changed, 7 insertions(+), 52 deletions(-)

diff --git a/tools/perf/Documentation/perf-diff.txt b/tools/perf/Documentation/perf-diff.txt
index 194f37d635df..5b3123d5721f 100644
--- a/tools/perf/Documentation/perf-diff.txt
+++ b/tools/perf/Documentation/perf-diff.txt
@@ -22,10 +22,6 @@ specified perf.data files.
 
 OPTIONS
 -------
--M::
---displacement::
-        Show position displacement relative to baseline.
-
 -D::
 --dump-raw-trace::
         Dump raw trace in ASCII.
diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
index d869029fb75e..b2e7d39f099b 100644
--- a/tools/perf/builtin-diff.c
+++ b/tools/perf/builtin-diff.c
@@ -23,7 +23,6 @@ static char const *input_old = "perf.data.old",
 		  *input_new = "perf.data";
 static char	  diff__default_sort_order[] = "dso,symbol";
 static bool  force;
-static bool show_displacement;
 static bool show_period;
 static bool show_formula;
 static bool show_baseline_only;
@@ -296,9 +295,8 @@ static void insert_hist_entry_by_name(struct rb_root *root,
 	rb_insert_color(&he->rb_node, root);
 }
 
-static void hists__name_resort(struct hists *self, bool sort)
+static void hists__name_resort(struct hists *self)
 {
-	unsigned long position = 1;
 	struct rb_root tmp = RB_ROOT;
 	struct rb_node *next = rb_first(&self->entries);
 
@@ -306,16 +304,12 @@ static void hists__name_resort(struct hists *self, bool sort)
 		struct hist_entry *n = rb_entry(next, struct hist_entry, rb_node);
 
 		next = rb_next(&n->rb_node);
-		n->position = position++;
 
-		if (sort) {
-			rb_erase(&n->rb_node, &self->entries);
-			insert_hist_entry_by_name(&tmp, n);
-		}
+		rb_erase(&n->rb_node, &self->entries);
+		insert_hist_entry_by_name(&tmp, n);
 	}
 
-	if (sort)
-		self->entries = tmp;
+	self->entries = tmp;
 }
 
 static struct perf_evsel *evsel_match(struct perf_evsel *evsel,
@@ -339,12 +333,8 @@ static void perf_evlist__resort_hists(struct perf_evlist *evlist, bool name)
 
 		hists__output_resort(hists);
 
-		/*
-		 * The hists__name_resort only sets possition
-		 * if name is false.
-		 */
-		if (name || ((!name) && show_displacement))
-			hists__name_resort(hists, name);
+		if (name)
+			hists__name_resort(hists);
 	}
 }
 
@@ -549,8 +539,6 @@ static const char * const diff_usage[] = {
 static const struct option options[] = {
 	OPT_INCR('v', "verbose", &verbose,
 		    "be more verbose (show symbol address, etc)"),
-	OPT_BOOLEAN('M', "displacement", &show_displacement,
-		    "Show position displacement relative to baseline"),
 	OPT_BOOLEAN('b', "baseline-only", &show_baseline_only,
 		    "Show only items with match in baseline"),
 	OPT_CALLBACK('c', "compute", &compute,
@@ -585,7 +573,7 @@ static const struct option options[] = {
 static void ui_init(void)
 {
 	/*
-	 * Display baseline/delta/ratio/displacement/
+	 * Display baseline/delta/ratio
 	 * formula/periods columns.
 	 */
 	perf_hpp__column_enable(PERF_HPP__BASELINE);
@@ -604,9 +592,6 @@ static void ui_init(void)
 		BUG_ON(1);
 	};
 
-	if (show_displacement)
-		perf_hpp__column_enable(PERF_HPP__DISPL);
-
 	if (show_formula)
 		perf_hpp__column_enable(PERF_HPP__FORMULA);
 
diff --git a/tools/perf/ui/hist.c b/tools/perf/ui/hist.c
index 1785bab7adfd..1889c12ca81f 100644
--- a/tools/perf/ui/hist.c
+++ b/tools/perf/ui/hist.c
@@ -351,30 +351,6 @@ static int hpp__entry_wdiff(struct perf_hpp *hpp, struct hist_entry *he)
 	return scnprintf(hpp->buf, hpp->size, fmt, buf);
 }
 
-static int hpp__header_displ(struct perf_hpp *hpp)
-{
-	return scnprintf(hpp->buf, hpp->size, "Displ.");
-}
-
-static int hpp__width_displ(struct perf_hpp *hpp __maybe_unused)
-{
-	return 6;
-}
-
-static int hpp__entry_displ(struct perf_hpp *hpp,
-			    struct hist_entry *he)
-{
-	struct hist_entry *pair = hist_entry__next_pair(he);
-	long displacement = pair ? pair->position - he->position : 0;
-	const char *fmt = symbol_conf.field_sep ? "%s" : "%6.6s";
-	char buf[32] = " ";
-
-	if (displacement)
-		scnprintf(buf, sizeof(buf), "%+4ld", displacement);
-
-	return scnprintf(hpp->buf, hpp->size, fmt, buf);
-}
-
 static int hpp__header_formula(struct perf_hpp *hpp)
 {
 	const char *fmt = symbol_conf.field_sep ? "%s" : "%70s";
@@ -427,7 +403,6 @@ struct perf_hpp_fmt perf_hpp__format[] = {
 	HPP__PRINT_FNS(delta),
 	HPP__PRINT_FNS(ratio),
 	HPP__PRINT_FNS(wdiff),
-	HPP__PRINT_FNS(displ),
 	HPP__PRINT_FNS(formula)
 };
 
diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h
index c1b2fade8e70..5b3b0075be64 100644
--- a/tools/perf/util/hist.h
+++ b/tools/perf/util/hist.h
@@ -154,7 +154,6 @@ enum {
 	PERF_HPP__DELTA,
 	PERF_HPP__RATIO,
 	PERF_HPP__WEIGHTED_DIFF,
-	PERF_HPP__DISPL,
 	PERF_HPP__FORMULA,
 
 	PERF_HPP__MAX_INDEX
-- 
1.7.9.2



* [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists
  2012-12-06 15:09 [PATCH 0/5] perf hists: Changes on hists__{match,link} (v3) Namhyung Kim
  2012-12-06 15:09 ` [PATCH 1/5] perf diff: Removing displacement output option Namhyung Kim
@ 2012-12-06 15:09 ` Namhyung Kim
  2012-12-06 16:53   ` Jiri Olsa
  2012-12-06 15:09 ` [PATCH 3/5] perf hists: Link hist entries before inserting to an output tree Namhyung Kim
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 14+ messages in thread
From: Namhyung Kim @ 2012-12-06 15:09 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Namhyung Kim,
	Stephane Eranian

From: Namhyung Kim <namhyung.kim@lge.com>

When comparing entries for collapsing, the given entry is put first,
and then the iterated entry.  This was not the case for
hist_entry__cmp() when it's called because the given sort keys don't
require collapsing.  So change the order for the sake of consistency.
It will be required for matching and/or linking multiple hist entries.
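
As an aside, the consistency matters because a comparator's argument
order flips the sign of its result.  An illustrative sketch (not the
perf code; the struct and helper here are made up):

```c
/*
 * Illustrative sketch only -- not part of the patch.  A comparator's
 * argument order flips the sign of its result, so a tree built with
 * cmp(new, node) but searched with cmp(node, key) walks in opposite
 * directions on asymmetric input.  Keeping one argument order
 * everywhere, as this patch does, avoids that class of bug.
 */
struct entry {
	long key;
};

/* negative: a sorts before b, following the hist_entry__cmp() convention */
static long entry_cmp(const struct entry *a, const struct entry *b)
{
	return a->key - b->key;
}
```

Swapping the arguments negates the result, so mixing the two orders
between insertion and lookup would send a search down the wrong
subtree.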

Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/util/hist.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index 82df1b26f0d4..d4471c21ed17 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -285,7 +285,7 @@ static struct hist_entry *add_hist_entry(struct hists *hists,
 		parent = *p;
 		he = rb_entry(parent, struct hist_entry, rb_node_in);
 
-		cmp = hist_entry__cmp(entry, he);
+		cmp = hist_entry__cmp(he, entry);
 
 		if (!cmp) {
 			he_stat__add_period(&he->stat, period);
@@ -729,7 +729,7 @@ static struct hist_entry *hists__add_dummy_entry(struct hists *hists,
 		parent = *p;
 		he = rb_entry(parent, struct hist_entry, rb_node);
 
-		cmp = hist_entry__cmp(pair, he);
+		cmp = hist_entry__cmp(he, pair);
 
 		if (!cmp)
 			goto out;
@@ -759,7 +759,7 @@ static struct hist_entry *hists__find_entry(struct hists *hists,
 
 	while (n) {
 		struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node);
-		int64_t cmp = hist_entry__cmp(he, iter);
+		int64_t cmp = hist_entry__cmp(iter, he);
 
 		if (cmp < 0)
 			n = n->rb_left;
-- 
1.7.9.2



* [PATCH 3/5] perf hists: Link hist entries before inserting to an output tree
  2012-12-06 15:09 [PATCH 0/5] perf hists: Changes on hists__{match,link} (v3) Namhyung Kim
  2012-12-06 15:09 ` [PATCH 1/5] perf diff: Removing displacement output option Namhyung Kim
  2012-12-06 15:09 ` [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists Namhyung Kim
@ 2012-12-06 15:09 ` Namhyung Kim
  2012-12-06 16:25   ` Jiri Olsa
  2012-12-06 15:09 ` [PATCH 4/5] perf diff: Use internal rb tree for compute resort Namhyung Kim
  2012-12-06 15:09 ` [PATCH 5/5] perf test: Add a test case for hists__{match,link} Namhyung Kim
  4 siblings, 1 reply; 14+ messages in thread
From: Namhyung Kim @ 2012-12-06 15:09 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Namhyung Kim,
	Stephane Eranian

From: Namhyung Kim <namhyung.kim@lge.com>

For matching and/or linking hist entries, they need to be sorted by the
given sort keys.  However the current hists__match/link did this on the
output trees, so the entries in the output tree needed to be resorted
before doing it.

This is not ideal since we already have trees for collecting or
collapsing entries before passing them to an output tree, and they're
already sorted by the given sort keys.  Since we don't need to print
anything at the time of matching/linking, we can use these internal
trees directly instead of bothering with a double resort on the output
tree.

Its only user - at the time of this writing - perf diff can be easily
converted to use the internal tree, and can save some lines too by
getting rid of the unnecessary resorting code.
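
The gain can be illustrated outside perf: once both sides are already
ordered by the sort keys (as the input/collapsed trees are), pairing
entries is a single merge-style walk with no extra resort.  A
hypothetical sketch, with plain sorted arrays standing in for the
rbtrees and a made-up helper name:

```c
/*
 * Hypothetical sketch, not the perf code: two arrays sorted by the
 * "sort key" stand in for the already-sorted internal rbtrees.  Since
 * both inputs are ordered, matching pairs needs only one linear walk,
 * which is why no resort of the output tree is required first.
 */
#include <stddef.h>

static size_t match_count(const long *leader, size_t nl,
			  const long *other, size_t no)
{
	size_t i = 0, j = 0, matched = 0;

	while (i < nl && j < no) {
		if (leader[i] == other[j]) {
			matched++;	/* would set the pair pointer here */
			i++;
			j++;
		} else if (leader[i] < other[j]) {
			i++;		/* leader-only entry, no pair */
		} else {
			j++;		/* other-only entry, no pair */
		}
	}
	return matched;
}
```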

Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/builtin-diff.c |   65 ++++++++++++---------------------------------
 tools/perf/util/hist.c    |   49 +++++++++++++++++++++++++---------
 2 files changed, 54 insertions(+), 60 deletions(-)

diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
index b2e7d39f099b..044ad99dcc90 100644
--- a/tools/perf/builtin-diff.c
+++ b/tools/perf/builtin-diff.c
@@ -275,43 +275,6 @@ static struct perf_tool tool = {
 	.ordering_requires_timestamps = true,
 };
 
-static void insert_hist_entry_by_name(struct rb_root *root,
-				      struct hist_entry *he)
-{
-	struct rb_node **p = &root->rb_node;
-	struct rb_node *parent = NULL;
-	struct hist_entry *iter;
-
-	while (*p != NULL) {
-		parent = *p;
-		iter = rb_entry(parent, struct hist_entry, rb_node);
-		if (hist_entry__cmp(he, iter) < 0)
-			p = &(*p)->rb_left;
-		else
-			p = &(*p)->rb_right;
-	}
-
-	rb_link_node(&he->rb_node, parent, p);
-	rb_insert_color(&he->rb_node, root);
-}
-
-static void hists__name_resort(struct hists *self)
-{
-	struct rb_root tmp = RB_ROOT;
-	struct rb_node *next = rb_first(&self->entries);
-
-	while (next != NULL) {
-		struct hist_entry *n = rb_entry(next, struct hist_entry, rb_node);
-
-		next = rb_next(&n->rb_node);
-
-		rb_erase(&n->rb_node, &self->entries);
-		insert_hist_entry_by_name(&tmp, n);
-	}
-
-	self->entries = tmp;
-}
-
 static struct perf_evsel *evsel_match(struct perf_evsel *evsel,
 				      struct perf_evlist *evlist)
 {
@@ -324,30 +287,34 @@ static struct perf_evsel *evsel_match(struct perf_evsel *evsel,
 	return NULL;
 }
 
-static void perf_evlist__resort_hists(struct perf_evlist *evlist, bool name)
+static void perf_evlist__resort_hists(struct perf_evlist *evlist)
 {
 	struct perf_evsel *evsel;
 
 	list_for_each_entry(evsel, &evlist->entries, node) {
 		struct hists *hists = &evsel->hists;
 
-		hists__output_resort(hists);
-
-		if (name)
-			hists__name_resort(hists);
+		hists__collapse_resort(hists);
 	}
 }
 
 static void hists__baseline_only(struct hists *hists)
 {
-	struct rb_node *next = rb_first(&hists->entries);
+	struct rb_root *root;
+	struct rb_node *next;
+
+	if (sort__need_collapse)
+		root = &hists->entries_collapsed;
+	else
+		root = hists->entries_in;
 
+	next = rb_first(root);
 	while (next != NULL) {
-		struct hist_entry *he = rb_entry(next, struct hist_entry, rb_node);
+		struct hist_entry *he = rb_entry(next, struct hist_entry, rb_node_in);
 
-		next = rb_next(&he->rb_node);
+		next = rb_next(&he->rb_node_in);
 		if (!hist_entry__next_pair(he)) {
-			rb_erase(&he->rb_node, &hists->entries);
+			rb_erase(&he->rb_node_in, root);
 			hist_entry__free(he);
 		}
 	}
@@ -471,6 +438,8 @@ static void hists__process(struct hists *old, struct hists *new)
 	else
 		hists__link(new, old);
 
+	hists__output_resort(new);
+
 	if (sort_compute) {
 		hists__precompute(new);
 		hists__compute_resort(new);
@@ -505,8 +474,8 @@ static int __cmd_diff(void)
 	evlist_old = older->evlist;
 	evlist_new = newer->evlist;
 
-	perf_evlist__resort_hists(evlist_old, true);
-	perf_evlist__resort_hists(evlist_new, false);
+	perf_evlist__resort_hists(evlist_old);
+	perf_evlist__resort_hists(evlist_new);
 
 	list_for_each_entry(evsel, &evlist_new->entries, node) {
 		struct perf_evsel *evsel_old;
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index d4471c21ed17..b0c9952d7c3f 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -720,16 +720,24 @@ void hists__inc_nr_events(struct hists *hists, u32 type)
 static struct hist_entry *hists__add_dummy_entry(struct hists *hists,
 						 struct hist_entry *pair)
 {
-	struct rb_node **p = &hists->entries.rb_node;
+	struct rb_root *root;
+	struct rb_node **p;
 	struct rb_node *parent = NULL;
 	struct hist_entry *he;
 	int cmp;
 
+	if (sort__need_collapse)
+		root = &hists->entries_collapsed;
+	else
+		root = hists->entries_in;
+
+	p = &root->rb_node;
+
 	while (*p != NULL) {
 		parent = *p;
-		he = rb_entry(parent, struct hist_entry, rb_node);
+		he = rb_entry(parent, struct hist_entry, rb_node_in);
 
-		cmp = hist_entry__cmp(he, pair);
+		cmp = hist_entry__collapse(he, pair);
 
 		if (!cmp)
 			goto out;
@@ -744,8 +752,8 @@ static struct hist_entry *hists__add_dummy_entry(struct hists *hists,
 	if (he) {
 		memset(&he->stat, 0, sizeof(he->stat));
 		he->hists = hists;
-		rb_link_node(&he->rb_node, parent, p);
-		rb_insert_color(&he->rb_node, &hists->entries);
+		rb_link_node(&he->rb_node_in, parent, p);
+		rb_insert_color(&he->rb_node_in, root);
 		hists__inc_nr_entries(hists, he);
 	}
 out:
@@ -755,11 +763,16 @@ out:
 static struct hist_entry *hists__find_entry(struct hists *hists,
 					    struct hist_entry *he)
 {
-	struct rb_node *n = hists->entries.rb_node;
+	struct rb_node *n;
+
+	if (sort__need_collapse)
+		n = hists->entries_collapsed.rb_node;
+	else
+		n = hists->entries_in->rb_node;
 
 	while (n) {
-		struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node);
-		int64_t cmp = hist_entry__cmp(iter, he);
+		struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node_in);
+		int64_t cmp = hist_entry__collapse(iter, he);
 
 		if (cmp < 0)
 			n = n->rb_left;
@@ -777,11 +790,17 @@ static struct hist_entry *hists__find_entry(struct hists *hists,
  */
 void hists__match(struct hists *leader, struct hists *other)
 {
+	struct rb_root *root;
 	struct rb_node *nd;
 	struct hist_entry *pos, *pair;
 
-	for (nd = rb_first(&leader->entries); nd; nd = rb_next(nd)) {
-		pos  = rb_entry(nd, struct hist_entry, rb_node);
+	if (sort__need_collapse)
+		root = &leader->entries_collapsed;
+	else
+		root = leader->entries_in;
+
+	for (nd = rb_first(root); nd; nd = rb_next(nd)) {
+		pos  = rb_entry(nd, struct hist_entry, rb_node_in);
 		pair = hists__find_entry(other, pos);
 
 		if (pair)
@@ -796,11 +815,17 @@ void hists__match(struct hists *leader, struct hists *other)
  */
 int hists__link(struct hists *leader, struct hists *other)
 {
+	struct rb_root *root;
 	struct rb_node *nd;
 	struct hist_entry *pos, *pair;
 
-	for (nd = rb_first(&other->entries); nd; nd = rb_next(nd)) {
-		pos = rb_entry(nd, struct hist_entry, rb_node);
+	if (sort__need_collapse)
+		root = &other->entries_collapsed;
+	else
+		root = other->entries_in;
+
+	for (nd = rb_first(root); nd; nd = rb_next(nd)) {
+		pos = rb_entry(nd, struct hist_entry, rb_node_in);
 
 		if (!hist_entry__has_pairs(pos)) {
 			pair = hists__add_dummy_entry(leader, pos);
-- 
1.7.9.2



* [PATCH 4/5] perf diff: Use internal rb tree for compute resort
  2012-12-06 15:09 [PATCH 0/5] perf hists: Changes on hists__{match,link} (v3) Namhyung Kim
                   ` (2 preceding siblings ...)
  2012-12-06 15:09 ` [PATCH 3/5] perf hists: Link hist entries before inserting to an output tree Namhyung Kim
@ 2012-12-06 15:09 ` Namhyung Kim
  2012-12-06 16:51   ` Jiri Olsa
  2012-12-06 15:09 ` [PATCH 5/5] perf test: Add a test case for hists__{match,link} Namhyung Kim
  4 siblings, 1 reply; 14+ messages in thread
From: Namhyung Kim @ 2012-12-06 15:09 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Namhyung Kim,
	Stephane Eranian

From: Namhyung Kim <namhyung.kim@lge.com>

There's no reason to run hists__compute_resort() on the output tree.
Convert it to use the internal tree so that the unnecessary
hists__output_resort() call can be removed.

Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/builtin-diff.c |   26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
index 044ad99dcc90..f66968e9f853 100644
--- a/tools/perf/builtin-diff.c
+++ b/tools/perf/builtin-diff.c
@@ -414,19 +414,25 @@ static void insert_hist_entry_by_compute(struct rb_root *root,
 
 static void hists__compute_resort(struct hists *hists)
 {
-	struct rb_root tmp = RB_ROOT;
-	struct rb_node *next = rb_first(&hists->entries);
+	struct rb_root *root;
+	struct rb_node *next;
+
+	if (sort__need_collapse)
+		root = &hists->entries_collapsed;
+	else
+		root = hists->entries_in;
+
+	hists->entries = RB_ROOT;
+	next = rb_first(root);
 
 	while (next != NULL) {
-		struct hist_entry *he = rb_entry(next, struct hist_entry, rb_node);
+		struct hist_entry *he;
 
-		next = rb_next(&he->rb_node);
+		he = rb_entry(next, struct hist_entry, rb_node_in);
+		next = rb_next(&he->rb_node_in);
 
-		rb_erase(&he->rb_node, &hists->entries);
-		insert_hist_entry_by_compute(&tmp, he, compute);
+		insert_hist_entry_by_compute(&hists->entries, he, compute);
 	}
-
-	hists->entries = tmp;
 }
 
 static void hists__process(struct hists *old, struct hists *new)
@@ -438,11 +444,11 @@ static void hists__process(struct hists *old, struct hists *new)
 	else
 		hists__link(new, old);
 
-	hists__output_resort(new);
-
 	if (sort_compute) {
 		hists__precompute(new);
 		hists__compute_resort(new);
+	} else {
+		hists__output_resort(new);
 	}
 
 	hists__fprintf(new, true, 0, 0, stdout);
-- 
1.7.9.2



* [PATCH 5/5] perf test: Add a test case for hists__{match,link}
  2012-12-06 15:09 [PATCH 0/5] perf hists: Changes on hists__{match,link} (v3) Namhyung Kim
                   ` (3 preceding siblings ...)
  2012-12-06 15:09 ` [PATCH 4/5] perf diff: Use internal rb tree for compute resort Namhyung Kim
@ 2012-12-06 15:09 ` Namhyung Kim
  4 siblings, 0 replies; 14+ messages in thread
From: Namhyung Kim @ 2012-12-06 15:09 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Namhyung Kim,
	Stephane Eranian

From: Namhyung Kim <namhyung.kim@lge.com>

As they are used from diff and event group report, add a test case to
verify their behaviors.

In this test I made a fake machine and two evsels.  Each evsel got 10
samples (thus hist entries) - 5 are common and the rest are not.  So
after hists__match() both of them will have 5 entries with the pair
set.

And the second evsel has a collapsed entry so that its total number of
entries is 9 - I made it this way to simulate a more realistic case.
Thus after hists__link() the first evsel will have 14 entries - 5
common (w/ pair), 5 unmatched (w/o pair) and 4 dummy (w/ pair).  And
the second evsel will have 9 entries, all with their pair set.
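
The expected counts follow from simple arithmetic.  A sketch checking
them, using this test's numbers; the helper names are made up for the
sketch and do not exist in perf:

```c
/*
 * Sanity-check of the counts described above.  The numbers mirror the
 * test setup (5 common + 5 distinct samples per evsel, one collapsed
 * duplicate in the second evsel); the helpers are illustrative only.
 */

/* entries in the second evsel: common + distinct, minus collapsed dups */
static int other_nr_entries(int common, int distinct, int collapsed)
{
	return common + distinct - collapsed;	/* 5 + 5 - 1 = 9 */
}

/* leader after hists__link(): own entries plus dummies for other-only */
static int leader_nr_entries(int common, int distinct, int other_total)
{
	int dummies = other_total - common;	/* 9 - 5 = 4 */

	return common + distinct + dummies;	/* 5 + 5 + 4 = 14 */
}
```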

Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 tools/perf/Makefile             |    1 +
 tools/perf/tests/builtin-test.c |    4 +
 tools/perf/tests/hists_link.c   |  502 +++++++++++++++++++++++++++++++++++++++
 tools/perf/tests/tests.h        |    1 +
 tools/perf/util/machine.h       |    1 +
 tools/perf/util/session.c       |    2 +-
 6 files changed, 510 insertions(+), 1 deletion(-)
 create mode 100644 tools/perf/tests/hists_link.c

diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index 75785bb98c30..3628065cda20 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -460,6 +460,7 @@ LIB_OBJS += $(OUTPUT)tests/evsel-roundtrip-name.o
 LIB_OBJS += $(OUTPUT)tests/evsel-tp-sched.o
 LIB_OBJS += $(OUTPUT)tests/pmu.o
 LIB_OBJS += $(OUTPUT)tests/util.o
+LIB_OBJS += $(OUTPUT)tests/hists_link.o
 
 BUILTIN_OBJS += $(OUTPUT)builtin-annotate.o
 BUILTIN_OBJS += $(OUTPUT)builtin-bench.o
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 186f67535494..479d10484a74 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -69,6 +69,10 @@ static struct test {
 		.func = test__attr,
 	},
 	{
+		.desc = "Test matching and linking multiple hists",
+		.func = test__hists_link,
+	},
+	{
 		.func = NULL,
 	},
 };
diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
new file mode 100644
index 000000000000..0f1aae3b8a99
--- /dev/null
+++ b/tools/perf/tests/hists_link.c
@@ -0,0 +1,502 @@
+#include "perf.h"
+#include "tests.h"
+#include "debug.h"
+#include "symbol.h"
+#include "sort.h"
+#include "evsel.h"
+#include "evlist.h"
+#include "machine.h"
+#include "thread.h"
+#include "parse-events.h"
+
+static struct {
+	u32 pid;
+	const char *comm;
+} fake_threads[] = {
+	{ 100, "perf" },
+	{ 200, "perf" },
+	{ 300, "bash" },
+};
+
+static struct {
+	u32 pid;
+	u64 start;
+	const char *filename;
+} fake_mmap_info[] = {
+	{ 100, 0x40000, "perf" },
+	{ 100, 0x50000, "libc" },
+	{ 100, 0xf0000, "[kernel]" },
+	{ 200, 0x40000, "perf" },
+	{ 200, 0x50000, "libc" },
+	{ 200, 0xf0000, "[kernel]" },
+	{ 300, 0x40000, "bash" },
+	{ 300, 0x50000, "libc" },
+	{ 300, 0xf0000, "[kernel]" },
+};
+
+struct fake_sym {
+	u64 start;
+	u64 length;
+	const char *name;
+};
+
+static struct fake_sym perf_syms[] = {
+	{ 700, 100, "main" },
+	{ 800, 100, "run_command" },
+	{ 900, 100, "cmd_record" },
+};
+
+static struct fake_sym bash_syms[] = {
+	{ 700, 100, "main" },
+	{ 800, 100, "xmalloc" },
+	{ 900, 100, "xfree" },
+};
+
+static struct fake_sym libc_syms[] = {
+	{ 700, 100, "malloc" },
+	{ 800, 100, "free" },
+	{ 900, 100, "realloc" },
+};
+
+static struct fake_sym kernel_syms[] = {
+	{ 700, 100, "schedule" },
+	{ 800, 100, "page_fault" },
+	{ 900, 100, "sys_perf_event_open" },
+};
+
+static struct {
+	const char *dso_name;
+	struct fake_sym *syms;
+	size_t nr_syms;
+} fake_symbols[] = {
+	{ "perf", perf_syms, ARRAY_SIZE(perf_syms) },
+	{ "bash", bash_syms, ARRAY_SIZE(bash_syms) },
+	{ "libc", libc_syms, ARRAY_SIZE(libc_syms) },
+	{ "[kernel]", kernel_syms, ARRAY_SIZE(kernel_syms) },
+};
+
+static struct machine *setup_fake_machine(void)
+{
+	struct rb_root machine_root = RB_ROOT;
+	struct machine *machine;
+	size_t i;
+
+	machine = machines__findnew(&machine_root, HOST_KERNEL_ID);
+	if (machine == NULL) {
+		pr_debug("Not enough memory for machine setup\n");
+		return NULL;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(fake_threads); i++) {
+		struct thread *thread;
+
+		thread = machine__findnew_thread(machine, fake_threads[i].pid);
+		if (thread == NULL)
+			goto out;
+
+		thread__set_comm(thread, fake_threads[i].comm);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(fake_mmap_info); i++) {
+		union perf_event fake_mmap_event = {
+			.mmap = {
+				.header = { .misc = PERF_RECORD_MISC_USER, },
+				.pid = fake_mmap_info[i].pid,
+				.start = fake_mmap_info[i].start,
+				.len = 0x1000ULL,
+				.pgoff = 0ULL,
+			},
+		};
+
+		strcpy(fake_mmap_event.mmap.filename,
+		       fake_mmap_info[i].filename);
+
+		machine__process_mmap_event(machine, &fake_mmap_event);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(fake_symbols); i++) {
+		size_t k;
+		struct dso *dso;
+
+		dso = __dsos__findnew(&machine->user_dsos,
+				      fake_symbols[i].dso_name);
+		if (dso == NULL)
+			goto out;
+
+		/* emulate dso__load() */
+		dso__set_loaded(dso, MAP__FUNCTION);
+
+		for (k = 0; k < fake_symbols[i].nr_syms; k++) {
+			struct symbol *sym;
+			struct fake_sym *fsym = &fake_symbols[i].syms[k];
+
+			sym = symbol__new(fsym->start, fsym->length,
+					  STB_GLOBAL, fsym->name);
+			if (sym == NULL)
+				goto out;
+
+			symbols__insert(&dso->symbols[MAP__FUNCTION], sym);
+		}
+	}
+
+	return machine;
+
+out:
+	pr_debug("Not enough memory for machine setup\n");
+	machine__delete_threads(machine);
+	machine__delete(machine);
+	return NULL;
+}
+
+struct sample {
+	u32 pid;
+	u64 ip;
+	struct thread *thread;
+	struct map *map;
+	struct symbol *sym;
+};
+
+static struct sample fake_common_samples[] = {
+	/* perf [kernel] schedule() */
+	{ .pid = 100, .ip = 0xf0000 + 700, },
+	/* perf [perf]   main() */
+	{ .pid = 200, .ip = 0x40000 + 700, },
+	/* perf [perf]   cmd_record() */
+	{ .pid = 200, .ip = 0x40000 + 900, },
+	/* bash [bash]   xmalloc() */
+	{ .pid = 300, .ip = 0x40000 + 800, },
+	/* bash [libc]   malloc() */
+	{ .pid = 300, .ip = 0x50000 + 700, },
+};
+
+static struct sample fake_samples[][5] = {
+	{
+		/* perf [perf]   run_command() */
+		{ .pid = 100, .ip = 0x40000 + 800, },
+		/* perf [libc]   malloc() */
+		{ .pid = 100, .ip = 0x50000 + 700, },
+		/* perf [kernel] page_fault() */
+		{ .pid = 100, .ip = 0xf0000 + 800, },
+		/* perf [kernel] sys_perf_event_open() */
+		{ .pid = 200, .ip = 0xf0000 + 900, },
+		/* bash [libc]   free() */
+		{ .pid = 300, .ip = 0x50000 + 800, },
+	},
+	{
+		/* perf [libc]   free() */
+		{ .pid = 200, .ip = 0x50000 + 800, },
+		/* bash [libc]   malloc() */
+		{ .pid = 300, .ip = 0x50000 + 700, }, /* will be merged */
+		/* bash [bash]   xfree() */
+		{ .pid = 300, .ip = 0x40000 + 900, },
+		/* bash [libc]   realloc() */
+		{ .pid = 300, .ip = 0x50000 + 900, },
+		/* bash [kernel] page_fault() */
+		{ .pid = 300, .ip = 0xf0000 + 800, },
+	},
+};
+
+static int add_hist_entries(struct perf_evlist *evlist, struct machine *machine)
+{
+	struct perf_evsel *evsel;
+	struct addr_location al;
+	struct hist_entry *he;
+	struct perf_sample sample = { .cpu = 0, };
+	size_t i = 0, k;
+
+	/*
+	 * each evsel will have 10 samples - 5 common and 5 distinct.
+	 * However the second evsel also has a collapsed entry for
+	 * "bash [libc] malloc" so total 9 entries will be in the tree.
+	 */
+	list_for_each_entry(evsel, &evlist->entries, node) {
+		for (k = 0; k < ARRAY_SIZE(fake_common_samples); k++) {
+			const union perf_event event = {
+				.ip = {
+					.header = {
+						.misc = PERF_RECORD_MISC_USER,
+					},
+					.pid = fake_common_samples[k].pid,
+					.ip  = fake_common_samples[k].ip,
+				},
+			};
+
+			if (perf_event__preprocess_sample(&event, machine, &al,
+							  &sample, 0) < 0)
+				goto out;
+
+			he = __hists__add_entry(&evsel->hists, &al, NULL, 1);
+			if (he == NULL)
+				goto out;
+
+			fake_common_samples[k].thread = al.thread;
+			fake_common_samples[k].map = al.map;
+			fake_common_samples[k].sym = al.sym;
+		}
+
+		for (k = 0; k < ARRAY_SIZE(fake_samples[i]); k++) {
+			const union perf_event event = {
+				.ip = {
+					.header = {
+						.misc = PERF_RECORD_MISC_USER,
+					},
+					.pid = fake_samples[i][k].pid,
+					.ip  = fake_samples[i][k].ip,
+				},
+			};
+
+			if (perf_event__preprocess_sample(&event, machine, &al,
+							  &sample, 0) < 0)
+				goto out;
+
+			he = __hists__add_entry(&evsel->hists, &al, NULL, 1);
+			if (he == NULL)
+				goto out;
+
+			fake_samples[i][k].thread = al.thread;
+			fake_samples[i][k].map = al.map;
+			fake_samples[i][k].sym = al.sym;
+		}
+		i++;
+	}
+
+	return 0;
+
+out:
+	pr_debug("Not enough memory for adding a hist entry\n");
+	return -1;
+}
+
+static int find_sample(struct sample *samples, size_t nr_samples,
+		       struct thread *t, struct map *m, struct symbol *s)
+{
+	while (nr_samples--) {
+		if (samples->thread == t && samples->map == m &&
+		    samples->sym == s)
+			return 1;
+		samples++;
+	}
+	return 0;
+}
+
+static int __validate_match(struct hists *hists)
+{
+	size_t count = 0;
+	struct rb_root *root;
+	struct rb_node *node;
+
+	/*
+	 * Only entries from fake_common_samples should have a pair.
+	 */
+	if (sort__need_collapse)
+		root = &hists->entries_collapsed;
+	else
+		root = hists->entries_in;
+
+	node = rb_first(root);
+	while (node) {
+		struct hist_entry *he;
+
+		he = rb_entry(node, struct hist_entry, rb_node_in);
+
+		if (hist_entry__has_pairs(he)) {
+			if (find_sample(fake_common_samples,
+					ARRAY_SIZE(fake_common_samples),
+					he->thread, he->ms.map, he->ms.sym)) {
+				count++;
+			} else {
+				pr_debug("Can't find the matched entry\n");
+				return -1;
+			}
+		}
+
+		node = rb_next(node);
+	}
+
+	if (count != ARRAY_SIZE(fake_common_samples)) {
+		pr_debug("Invalid count for matched entries: %zd of %zd\n",
+			 count, ARRAY_SIZE(fake_common_samples));
+		return -1;
+	}
+
+	return 0;
+}
+
+static int validate_match(struct hists *leader, struct hists *other)
+{
+	return __validate_match(leader) || __validate_match(other);
+}
+
+static int __validate_link(struct hists *hists, int idx)
+{
+	size_t count = 0;
+	size_t count_pair = 0;
+	size_t count_dummy = 0;
+	struct rb_root *root;
+	struct rb_node *node;
+
+	/*
+	 * Leader hists (idx = 0) will have dummy entries from the other,
+	 * and some entries will have no pair.  However, every entry
+	 * in the other hists should have a (dummy) pair.
+	 */
+	if (sort__need_collapse)
+		root = &hists->entries_collapsed;
+	else
+		root = hists->entries_in;
+
+	node = rb_first(root);
+	while (node) {
+		struct hist_entry *he;
+
+		he = rb_entry(node, struct hist_entry, rb_node_in);
+
+		if (hist_entry__has_pairs(he)) {
+			if (!find_sample(fake_common_samples,
+					 ARRAY_SIZE(fake_common_samples),
+					 he->thread, he->ms.map, he->ms.sym) &&
+			    !find_sample(fake_samples[idx],
+					 ARRAY_SIZE(fake_samples[idx]),
+					 he->thread, he->ms.map, he->ms.sym)) {
+				count_dummy++;
+			}
+			count_pair++;
+		} else if (idx) {
+			pr_debug("An entry from the other hists should have a pair\n");
+			return -1;
+		}
+
+		count++;
+		node = rb_next(node);
+	}
+
+	/*
+	 * Note that we have an entry collapsed in the other (idx = 1) hists.
+	 */
+	if (idx == 0) {
+		if (count_dummy != ARRAY_SIZE(fake_samples[1]) - 1) {
+			pr_debug("Invalid count of dummy entries: %zd of %zd\n",
+				 count_dummy, ARRAY_SIZE(fake_samples[1]) - 1);
+			return -1;
+		}
+		if (count != count_pair + ARRAY_SIZE(fake_samples[0])) {
+			pr_debug("Invalid count of total leader entries: %zd of %zd\n",
+				 count, count_pair + ARRAY_SIZE(fake_samples[0]));
+			return -1;
+		}
+	} else {
+		if (count != count_pair) {
+			pr_debug("Invalid count of total other entries: %zd of %zd\n",
+				 count, count_pair);
+			return -1;
+		}
+		if (count_dummy > 0) {
+			pr_debug("Other hists should not have dummy entries: %zd\n",
+				 count_dummy);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int validate_link(struct hists *leader, struct hists *other)
+{
+	return __validate_link(leader, 0) || __validate_link(other, 1);
+}
+
+static void print_hists(struct hists *hists)
+{
+	int i = 0;
+	struct rb_root *root;
+	struct rb_node *node;
+
+	if (sort__need_collapse)
+		root = &hists->entries_collapsed;
+	else
+		root = hists->entries_in;
+
+	pr_info("----- %s --------\n", __func__);
+	node = rb_first(root);
+	while (node) {
+		struct hist_entry *he;
+
+		he = rb_entry(node, struct hist_entry, rb_node_in);
+
+		pr_info("%2d: entry: %-8s [%-8s] %20s: period = %"PRIu64"\n",
+			i, he->thread->comm, he->ms.map->dso->short_name,
+			he->ms.sym->name, he->stat.period);
+
+		i++;
+		node = rb_next(node);
+	}
+}
+
+int test__hists_link(void)
+{
+	int err = -1;
+	struct machine *machine = NULL;
+	struct perf_evsel *evsel, *first;
+	struct perf_evlist *evlist = perf_evlist__new(NULL, NULL);
+
+	if (evlist == NULL)
+		return -ENOMEM;
+
+	err = parse_events(evlist, "cpu-clock", 0);
+	if (err)
+		goto out;
+	err = parse_events(evlist, "task-clock", 0);
+	if (err)
+		goto out;
+
+	/* default sort order (comm,dso,sym) will be used */
+	setup_sorting(NULL, NULL);
+
+	/* setup threads/dso/map/symbols also */
+	machine = setup_fake_machine();
+	if (!machine)
+		goto out;
+
+	if (verbose > 1)
+		machine__fprintf(machine, stderr);
+
+	/* process sample events */
+	err = add_hist_entries(evlist, machine);
+	if (err < 0)
+		goto out;
+
+	list_for_each_entry(evsel, &evlist->entries, node) {
+		hists__collapse_resort(&evsel->hists);
+
+		if (verbose > 2)
+			print_hists(&evsel->hists);
+	}
+
+	first = perf_evlist__first(evlist);
+	evsel = perf_evlist__last(evlist);
+
+	/* match common entries */
+	hists__match(&first->hists, &evsel->hists);
+	err = validate_match(&first->hists, &evsel->hists);
+	if (err)
+		goto out;
+
+	/* link common and/or dummy entries */
+	hists__link(&first->hists, &evsel->hists);
+	err = validate_link(&first->hists, &evsel->hists);
+	if (err)
+		goto out;
+
+	err = 0;
+
+out:
+	/* tear down everything */
+	perf_evlist__delete(evlist);
+
+	if (machine) {
+		machine__delete_threads(machine);
+		machine__delete(machine);
+	}
+
+	return err;
+}
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index fc121edab016..eddf1ca8cec9 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -15,6 +15,7 @@ int test__pmu(void);
 int test__attr(void);
 int test__dso_data(void);
 int test__parse_events(void);
+int test__hists_link(void);
 
 /* Util */
 int trace_event__id(const char *evname);
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index b7cde7467d55..166c93ccea22 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -89,6 +89,7 @@ static inline bool machine__is_host(struct machine *machine)
 
 struct thread *machine__findnew_thread(struct machine *machine, pid_t pid);
 void machine__remove_thread(struct machine *machine, struct thread *th);
+void machine__delete_threads(struct machine *machine);
 
 size_t machine__fprintf(struct machine *machine, FILE *fp);
 
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index aa5e58255cba..3f25862e3ab9 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -177,7 +177,7 @@ static void perf_session__delete_dead_threads(struct perf_session *session)
 	machine__delete_dead_threads(&session->host_machine);
 }
 
-static void machine__delete_threads(struct machine *self)
+void machine__delete_threads(struct machine *self)
 {
 	struct rb_node *nd = rb_first(&self->threads);
 
-- 
1.7.9.2


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/5] perf hists: Link hist entries before inserting to an output tree
  2012-12-06 15:09 ` [PATCH 3/5] perf hists: Link hist entries before inserting to an output tree Namhyung Kim
@ 2012-12-06 16:25   ` Jiri Olsa
  2012-12-07  8:45     ` Namhyung Kim
  0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2012-12-06 16:25 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, LKML,
	Namhyung Kim, Stephane Eranian

On Fri, Dec 07, 2012 at 12:09:39AM +0900, Namhyung Kim wrote:
> From: Namhyung Kim <namhyung.kim@lge.com>
> 
> For matching and/or linking hist entries, they need to be sorted by
> the given sort keys.  However, the current hists__match/link did this
> on the output trees, so the entries in the output tree needed to be
> resorted before doing it.
> 
> This is not so good, since we have trees for collecting or collapsing
> entries before passing them to an output tree, and they're already
> sorted by the given sort keys.  Since we don't need to print anything
> at the time of matching/linking, we can use these internal trees
> directly instead of bothering with a double resort on the output
> tree.

this patch also makes diff work over collapsed entries,
which was not possible before.. nice ;)

outputs like:

[jolsa@krava perf]$ ./perf diff  -s comm
# Event 'cycles:u'
#
# Baseline    Delta          Command
# ........  .......  ...............
#
     5.24%  +68.96%          firefox
     2.34%   +5.66%                X
    48.51%  -41.53%             mocp
    14.98%  -11.53%            skype
    18.01%  -15.35%  plugin-containe
     1.03%   +1.48%            xchat
     5.54%   -4.61%          gkrellm
     1.41%   -0.93%            xterm
             +0.33%  xmonad-x86_64-l
             +0.23%              vim
             +0.07%     xscreensaver
     0.19%   -0.14%          swapper
     1.00%   -0.97%   NetworkManager
     0.28%   -0.25%              ssh
     0.11%   -0.09%            sleep
     0.84%   -0.83%      dbus-daemon
     0.02%   -0.01%             perf
     0.40%   -0.40%   wpa_supplicant
     0.05%   -0.05%              gpm
     0.04%   -0.04%            crond


small nitpick below, otherwise

Acked-by: Jiri Olsa <jolsa@redhat.com>


> 
> Its only user - at the time of this writing - perf diff can be easily
> converted to use the internal tree, and can save some lines too by
> getting rid of the unnecessary resorting code.
> 
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Stephane Eranian <eranian@google.com>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
>  tools/perf/builtin-diff.c |   65 ++++++++++++---------------------------------
>  tools/perf/util/hist.c    |   49 +++++++++++++++++++++++++---------
>  2 files changed, 54 insertions(+), 60 deletions(-)
> 
> diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
> index b2e7d39f099b..044ad99dcc90 100644
> --- a/tools/perf/builtin-diff.c
> +++ b/tools/perf/builtin-diff.c
> @@ -275,43 +275,6 @@ static struct perf_tool tool = {
>  	return NULL;

SNIP

>  }
>  
> -static void perf_evlist__resort_hists(struct perf_evlist *evlist, bool name)
> +static void perf_evlist__resort_hists(struct perf_evlist *evlist)

this could be called 'perf_evlist__collapse_resort' now

>  {
>  	struct perf_evsel *evsel;
>  
>  	list_for_each_entry(evsel, &evlist->entries, node) {
>  		struct hists *hists = &evsel->hists;
>  
> -		hists__output_resort(hists);
> -
> -		if (name)
> -			hists__name_resort(hists);
> +		hists__collapse_resort(hists);
>  	}
>  }

SNIP


* Re: [PATCH 4/5] perf diff: Use internal rb tree for compute resort
  2012-12-06 15:09 ` [PATCH 4/5] perf diff: Use internal rb tree for compute resort Namhyung Kim
@ 2012-12-06 16:51   ` Jiri Olsa
  2012-12-07  8:53     ` Namhyung Kim
  0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2012-12-06 16:51 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, LKML,
	Namhyung Kim, Stephane Eranian

On Fri, Dec 07, 2012 at 12:09:40AM +0900, Namhyung Kim wrote:
> From: Namhyung Kim <namhyung.kim@lge.com>
> 
> There's no reason to run hists_compute_resort() using the output
> tree.  Convert it to use the internal tree so that the unnecessary
> _output_resort can be removed.

I have another patch in the queue that omits dummy entries from
displaying a number in the compute column, so we don't get confusing
'sorted' outputs like:

[jolsa@krava perf]$ ./perf diff -c+delta
# Event 'cycles:u'
#
# Baseline    Delta  Shared Object                      Symbol
# ........  .......  .............  ..........................
#
    17.92%  -17.92%  libc-2.15.so   [.] _IO_link_in           
            +77.54%  libc-2.15.so   [.] __fprintf_chk         
    15.64%  -15.64%  libc-2.15.so   [.] _dl_addr              
     0.08%   +0.61%  ld-2.15.so     [.] _start                
    12.16%  -12.16%  ld-2.15.so     [.] dl_main               
    15.39%  -15.39%  ld-2.15.so     [.] _dl_check_map_versions
    38.81%  -17.04%  [kernel.kallsyms]  [k] page_fault            

just in case anyone actually tries and wonders ;)


We need the following change as well, because the output resort also
does the column width recalculation.  Please add it if you respin, or
I can send it later.

other than that:

Acked-by: Jiri Olsa <jolsa@redhat.com>

thanks,
jirka

---
diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
index f66968e..6f56f78 100644
--- a/tools/perf/builtin-diff.c
+++ b/tools/perf/builtin-diff.c
@@ -425,12 +425,15 @@ static void hists__compute_resort(struct hists *hists)
 	hists->entries = RB_ROOT;
 	next = rb_first(root);
 
+	hists__reset_col_len(hists);
+
 	while (next != NULL) {
 		struct hist_entry *he;
 
 		he = rb_entry(next, struct hist_entry, rb_node_in);
 		next = rb_next(&he->rb_node_in);
 
+		hists__calc_col_len(hists, he);
 		insert_hist_entry_by_compute(&hists->entries, he, compute);
 	}
 }


* Re: [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists
  2012-12-06 15:09 ` [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists Namhyung Kim
@ 2012-12-06 16:53   ` Jiri Olsa
  2012-12-06 19:09     ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 14+ messages in thread
From: Jiri Olsa @ 2012-12-06 16:53 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, LKML,
	Namhyung Kim, Stephane Eranian

On Fri, Dec 07, 2012 at 12:09:38AM +0900, Namhyung Kim wrote:
> From: Namhyung Kim <namhyung.kim@lge.com>
> 
> When comparing entries for collapsing, put the given entry first, and
> then the iterated entry.  This is not the case for hist_entry__cmp(),
> which is called when the given sort keys don't require collapsing.
> So change the order for the sake of consistency.  It will be required
> for matching and/or linking multiple hist entries.

As discussed with Arnaldo, this change seems like it changes the
sort order... could you elaborate on how it is useful in the future?

thanks,
jirka

> 
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Stephane Eranian <eranian@google.com>
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> ---
>  tools/perf/util/hist.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> index 82df1b26f0d4..d4471c21ed17 100644
> --- a/tools/perf/util/hist.c
> +++ b/tools/perf/util/hist.c
> @@ -285,7 +285,7 @@ static struct hist_entry *add_hist_entry(struct hists *hists,
>  		parent = *p;
>  		he = rb_entry(parent, struct hist_entry, rb_node_in);
>  
> -		cmp = hist_entry__cmp(entry, he);
> +		cmp = hist_entry__cmp(he, entry);
>  
>  		if (!cmp) {
>  			he_stat__add_period(&he->stat, period);
> @@ -729,7 +729,7 @@ static struct hist_entry *hists__add_dummy_entry(struct hists *hists,
>  		parent = *p;
>  		he = rb_entry(parent, struct hist_entry, rb_node);
>  
> -		cmp = hist_entry__cmp(pair, he);
> +		cmp = hist_entry__cmp(he, pair);
>  
>  		if (!cmp)
>  			goto out;
> @@ -759,7 +759,7 @@ static struct hist_entry *hists__find_entry(struct hists *hists,
>  
>  	while (n) {
>  		struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node);
> -		int64_t cmp = hist_entry__cmp(he, iter);
> +		int64_t cmp = hist_entry__cmp(iter, he);
>  
>  		if (cmp < 0)
>  			n = n->rb_left;
> -- 
> 1.7.9.2
> 


* Re: [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists
  2012-12-06 16:53   ` Jiri Olsa
@ 2012-12-06 19:09     ` Arnaldo Carvalho de Melo
  2012-12-07  8:38       ` Namhyung Kim
  0 siblings, 1 reply; 14+ messages in thread
From: Arnaldo Carvalho de Melo @ 2012-12-06 19:09 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Namhyung Kim, Ingo Molnar, Peter Zijlstra, LKML, Namhyung Kim,
	Stephane Eranian

Em Thu, Dec 06, 2012 at 05:53:25PM +0100, Jiri Olsa escreveu:
> On Fri, Dec 07, 2012 at 12:09:38AM +0900, Namhyung Kim wrote:
> > From: Namhyung Kim <namhyung.kim@lge.com>
> > 
> > When comparing entries for collapsing, put the given entry first, and
> > then the iterated entry.  This is not the case for hist_entry__cmp(),
> > which is called when the given sort keys don't require collapsing.
> > So change the order for the sake of consistency.  It will be required
> > for matching and/or linking multiple hist entries.
> 
> As discussed with Arnaldo, this change seems like it changes the
> sort order... could you elaborate on how it is useful in the future?

In several places the order was (he, iter), and then it became
(iter, he), something like that, so he inverted it for consistency.
But then he needs to invert it in the cmp function too; I'm unsure if
this is worth the trouble now.  Perhaps a comment placed in the right
spot would clarify things,

- Arnaldo
 
> thanks,
> jirka
> 
> > 
> > Cc: Jiri Olsa <jolsa@redhat.com>
> > Cc: Stephane Eranian <eranian@google.com>
> > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > ---
> >  tools/perf/util/hist.c |    6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> > 
> > diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> > index 82df1b26f0d4..d4471c21ed17 100644
> > --- a/tools/perf/util/hist.c
> > +++ b/tools/perf/util/hist.c
> > @@ -285,7 +285,7 @@ static struct hist_entry *add_hist_entry(struct hists *hists,
> >  		parent = *p;
> >  		he = rb_entry(parent, struct hist_entry, rb_node_in);
> >  
> > -		cmp = hist_entry__cmp(entry, he);
> > +		cmp = hist_entry__cmp(he, entry);
> >  
> >  		if (!cmp) {
> >  			he_stat__add_period(&he->stat, period);
> > @@ -729,7 +729,7 @@ static struct hist_entry *hists__add_dummy_entry(struct hists *hists,
> >  		parent = *p;
> >  		he = rb_entry(parent, struct hist_entry, rb_node);
> >  
> > -		cmp = hist_entry__cmp(pair, he);
> > +		cmp = hist_entry__cmp(he, pair);
> >  
> >  		if (!cmp)
> >  			goto out;
> > @@ -759,7 +759,7 @@ static struct hist_entry *hists__find_entry(struct hists *hists,
> >  
> >  	while (n) {
> >  		struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node);
> > -		int64_t cmp = hist_entry__cmp(he, iter);
> > +		int64_t cmp = hist_entry__cmp(iter, he);
> >  
> >  		if (cmp < 0)
> >  			n = n->rb_left;
> > -- 
> > 1.7.9.2
> > 


* Re: [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists
  2012-12-06 19:09     ` Arnaldo Carvalho de Melo
@ 2012-12-07  8:38       ` Namhyung Kim
  2012-12-07 10:18         ` Jiri Olsa
  0 siblings, 1 reply; 14+ messages in thread
From: Namhyung Kim @ 2012-12-07  8:38 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML, Namhyung Kim,
	Stephane Eranian

Hi Arnaldo,

On Thu, 6 Dec 2012 16:09:20 -0300, Arnaldo Carvalho de Melo wrote:
> Em Thu, Dec 06, 2012 at 05:53:25PM +0100, Jiri Olsa escreveu:
>> On Fri, Dec 07, 2012 at 12:09:38AM +0900, Namhyung Kim wrote:
>> > From: Namhyung Kim <namhyung.kim@lge.com>
>> > 
>> > When comparing entries for collapsing, put the given entry first, and
>> > then the iterated entry.  This is not the case for hist_entry__cmp(),
>> > which is called when the given sort keys don't require collapsing.
>> > So change the order for the sake of consistency.  It will be required
>> > for matching and/or linking multiple hist entries.
>> 
>> As discussed with Arnaldo, this change seems like it changes the
>> sort order... could you elaborate on how it is useful in the future?
>
> In several places the order was (he, iter), and then it became
> (iter, he), something like that, so he inverted it for consistency.
> But then he needs to invert it in the cmp function too; I'm unsure if
> this is worth the trouble now.  Perhaps a comment placed in the right
> spot would clarify things,

The point is that it needs to use the same order when comparing two
entries for both inserting (add_hist_entry) and linking
(hists__add_dummy_entry and hists__find_entry).  This was simple when
we used the output tree: it's a single tree, so we could make sure it
used the same order as insertion.  But by using the internal trees we
have to select between the inserting tree (entries_in) and the
collapsing tree (entries_collapsed) based on the given sort keys.

Unfortunately, inserting and collapsing used different orders - (he, iter)
vs. (iter, he) - so we would need to use the corresponding (different)
order for match/link as well.  That means that without this patch, we
would have to call the comparison function with a different argument
order, like the following:


@@ -739,6 +739,10 @@ static struct hist_entry *hists__add_dummy_entry(struct hists *hists,
 
                cmp = hist_entry__collapse(he, pair);
 
+               if (sort__need_collapse)
+                       cmp = hist_entry__collapse(he, pair);
+               else
+                       cmp = hist_entry__cmp(pair, he);
                if (!cmp)
                        goto out;
 
@@ -772,7 +776,12 @@ static struct hist_entry *hists__find_entry(struct hists *hists,
 
        while (n) {
                struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node_in);
-               int64_t cmp = hist_entry__collapse(iter, he);
+               int64_t cmp;
+
+               if (sort__need_collapse)
+                       cmp = hist_entry__collapse(iter, he);
+               else
+                       cmp = hist_entry__cmp(he, iter);
 
                if (cmp < 0)
                        n = n->rb_left;

It doesn't look good, especially since hist_entry__collapse will be
the same as hist_entry__cmp if 'sort__need_collapse' is false.  If we
can make the order consistent, it can be converted to a single
_collapse() call without the conditional.

Thanks,
Namhyung


* Re: [PATCH 3/5] perf hists: Link hist entries before inserting to an output tree
  2012-12-06 16:25   ` Jiri Olsa
@ 2012-12-07  8:45     ` Namhyung Kim
  0 siblings, 0 replies; 14+ messages in thread
From: Namhyung Kim @ 2012-12-07  8:45 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, LKML,
	Namhyung Kim, Stephane Eranian

Hi,

On Thu, 6 Dec 2012 17:25:43 +0100, Jiri Olsa wrote:
> On Fri, Dec 07, 2012 at 12:09:39AM +0900, Namhyung Kim wrote:
>> From: Namhyung Kim <namhyung.kim@lge.com>
>> 
>> For matching and/or linking hist entries, they need to be sorted by
>> the given sort keys.  However, the current hists__match/link did this
>> on the output trees, so the entries in the output tree needed to be
>> resorted before doing it.
>> 
>> This is not so good, since we have trees for collecting or collapsing
>> entries before passing them to an output tree, and they're already
>> sorted by the given sort keys.  Since we don't need to print anything
>> at the time of matching/linking, we can use these internal trees
>> directly instead of bothering with a double resort on the output
>> tree.
>
> this patch also makes diff work over collapsed entries,
> which was not possible before.. nice ;)
>
> outputs like:
>
> [jolsa@krava perf]$ ./perf diff  -s comm
> # Event 'cycles:u'
> #
> # Baseline    Delta          Command
> # ........  .......  ...............
> #
>      5.24%  +68.96%          firefox
>      2.34%   +5.66%                X
>     48.51%  -41.53%             mocp
>     14.98%  -11.53%            skype
>     18.01%  -15.35%  plugin-containe
>      1.03%   +1.48%            xchat
>      5.54%   -4.61%          gkrellm
>      1.41%   -0.93%            xterm
>              +0.33%  xmonad-x86_64-l
>              +0.23%              vim
>              +0.07%     xscreensaver
>      0.19%   -0.14%          swapper
>      1.00%   -0.97%   NetworkManager
>      0.28%   -0.25%              ssh
>      0.11%   -0.09%            sleep
>      0.84%   -0.83%      dbus-daemon
>      0.02%   -0.01%             perf
>      0.40%   -0.40%   wpa_supplicant
>      0.05%   -0.05%              gpm
>      0.04%   -0.04%            crond
>
>
> small nitpick below, otherwise
>
> Acked-by: Jiri Olsa <jolsa@redhat.com>

Thanks!

>
>
>> 
>> Its only user - at the time of this writing - perf diff can be easily
>> converted to use the internal tree, and can save some lines too by
>> getting rid of the unnecessary resorting code.
>> 
>> Cc: Jiri Olsa <jolsa@redhat.com>
>> Cc: Stephane Eranian <eranian@google.com>
>> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
>> ---
>>  tools/perf/builtin-diff.c |   65 ++++++++++++---------------------------------
>>  tools/perf/util/hist.c    |   49 +++++++++++++++++++++++++---------
>>  2 files changed, 54 insertions(+), 60 deletions(-)
>> 
>> diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
>> index b2e7d39f099b..044ad99dcc90 100644
>> --- a/tools/perf/builtin-diff.c
>> +++ b/tools/perf/builtin-diff.c
>> @@ -275,43 +275,6 @@ static struct perf_tool tool = {
>>  	return NULL;A
>
> SNIP
>
>>  }
>>  
>> -static void perf_evlist__resort_hists(struct perf_evlist *evlist, bool name)
>> +static void perf_evlist__resort_hists(struct perf_evlist *evlist)
>
> this could be called 'perf_evlist__collapse_resort' now

Will change in the next spin.

Thanks,
Namhyung


* Re: [PATCH 4/5] perf diff: Use internal rb tree for compute resort
  2012-12-06 16:51   ` Jiri Olsa
@ 2012-12-07  8:53     ` Namhyung Kim
  0 siblings, 0 replies; 14+ messages in thread
From: Namhyung Kim @ 2012-12-07  8:53 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, LKML,
	Namhyung Kim, Stephane Eranian

On Thu, 6 Dec 2012 17:51:36 +0100, Jiri Olsa wrote:
> On Fri, Dec 07, 2012 at 12:09:40AM +0900, Namhyung Kim wrote:
>> From: Namhyung Kim <namhyung.kim@lge.com>
>> 
>> There's no reason to run hists_compute_resort() using the output
>> tree.  Convert it to use the internal tree so that the unnecessary
>> _output_resort can be removed.
>
> I have another patch in the queue that omits dummy entries from
> displaying a number in the compute column, so we don't get confusing
> 'sorted' outputs like:
>
> [jolsa@krava perf]$ ./perf diff -c+delta
> # Event 'cycles:u'
> #
> # Baseline    Delta  Shared Object                      Symbol
> # ........  .......  .............  ..........................
> #
>     17.92%  -17.92%  libc-2.15.so   [.] _IO_link_in           
>             +77.54%  libc-2.15.so   [.] __fprintf_chk         
>     15.64%  -15.64%  libc-2.15.so   [.] _dl_addr              
>      0.08%   +0.61%  ld-2.15.so     [.] _start                
>     12.16%  -12.16%  ld-2.15.so     [.] dl_main               
>     15.39%  -15.39%  ld-2.15.so     [.] _dl_check_map_versions
>     38.81%  -17.04%  [kernel.kallsyms]  [k] page_fault            
>
> just in case anyone actually tries and wonders ;)

Sounds great!

>
> We need the following change as well, because the output resort also
> does the column width recalculation.  Please add it if you respin, or
> I can send it later.

Okay, I'll resend a new version after getting a reply on patch 2/5
from Arnaldo.  It'd be better if you sent me a formal patch for this.

>
> other than that:
>
> Acked-by: Jiri Olsa <jolsa@redhat.com>

Thanks,
Namhyung

>
> ---
> diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
> index f66968e..6f56f78 100644
> --- a/tools/perf/builtin-diff.c
> +++ b/tools/perf/builtin-diff.c
> @@ -425,12 +425,15 @@ static void hists__compute_resort(struct hists *hists)
>  	hists->entries = RB_ROOT;
>  	next = rb_first(root);
>  
> +	hists__reset_col_len(hists);
> +
>  	while (next != NULL) {
>  		struct hist_entry *he;
>  
>  		he = rb_entry(next, struct hist_entry, rb_node_in);
>  		next = rb_next(&he->rb_node_in);
>  
> +		hists__calc_col_len(hists, he);
>  		insert_hist_entry_by_compute(&hists->entries, he, compute);
>  	}
>  }


* Re: [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists
  2012-12-07  8:38       ` Namhyung Kim
@ 2012-12-07 10:18         ` Jiri Olsa
  0 siblings, 0 replies; 14+ messages in thread
From: Jiri Olsa @ 2012-12-07 10:18 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, Peter Zijlstra, LKML,
	Namhyung Kim, Stephane Eranian

On Fri, Dec 07, 2012 at 05:38:22PM +0900, Namhyung Kim wrote:
> Hi Arnaldo,
> 

SNIP

> @@ -739,6 +739,10 @@ static struct hist_entry *hists__add_dummy_entry(struct hists *hists,
>  
>                 cmp = hist_entry__collapse(he, pair);
>  
> +               if (sort__need_collapse)
> +                       cmp = hist_entry__collapse(he, pair);
> +               else
> +                       cmp = hist_entry__cmp(pair, he);
>                 if (!cmp)
>                         goto out;
>  
> @@ -772,7 +776,12 @@ static struct hist_entry *hists__find_entry(struct hists *hists,
>  
>         while (n) {
>                 struct hist_entry *iter = rb_entry(n, struct hist_entry, rb_node_in);
> -               int64_t cmp = hist_entry__collapse(iter, he);
> +               int64_t cmp;
> +
> +               if (sort__need_collapse)
> +                       cmp = hist_entry__collapse(iter, he);
> +               else
> +                       cmp = hist_entry__cmp(he, iter);
>  
>                 if (cmp < 0)
>                         n = n->rb_left;
> 
> It doesn't look good, especially since hist_entry__collapse will be
> the same as hist_entry__cmp if 'sort__need_collapse' is false.  If we
> can make the order consistent, it can be converted to a single
> _collapse() call without the conditional.
> 
> Thanks,
> Namhyung

I've got non-matching entries in diff after applying just patch 2/5,
and it's caused by a missing he/iter swap in the initial name resort
in the insert_hist_entry_by_name function.

I understand that function goes away in your next patch, with
add_hist_entry (swap ok) being the one doing the initial name resort,
but still.. it took me some time to figure this out ;)

jirka

---
diff --git a/tools/perf/builtin-diff.c b/tools/perf/builtin-diff.c
index b2e7d39..4dda6f4 100644
--- a/tools/perf/builtin-diff.c
+++ b/tools/perf/builtin-diff.c
@@ -285,7 +285,7 @@ static void insert_hist_entry_by_name(struct rb_root *root,
 	while (*p != NULL) {
 		parent = *p;
 		iter = rb_entry(parent, struct hist_entry, rb_node);
-		if (hist_entry__cmp(he, iter) < 0)
+		if (hist_entry__cmp(iter, he) < 0)
 			p = &(*p)->rb_left;
 		else
 			p = &(*p)->rb_right;


Thread overview: 14+ messages
2012-12-06 15:09 [PATCH 0/5] perf hists: Changes on hists__{match,link} (v3) Namhyung Kim
2012-12-06 15:09 ` [PATCH 1/5] perf diff: Removing displacement output option Namhyung Kim
2012-12-06 15:09 ` [PATCH 2/5] perf hists: Exchange order of comparing items when collapsing hists Namhyung Kim
2012-12-06 16:53   ` Jiri Olsa
2012-12-06 19:09     ` Arnaldo Carvalho de Melo
2012-12-07  8:38       ` Namhyung Kim
2012-12-07 10:18         ` Jiri Olsa
2012-12-06 15:09 ` [PATCH 3/5] perf hists: Link hist entries before inserting to an output tree Namhyung Kim
2012-12-06 16:25   ` Jiri Olsa
2012-12-07  8:45     ` Namhyung Kim
2012-12-06 15:09 ` [PATCH 4/5] perf diff: Use internal rb tree for compute resort Namhyung Kim
2012-12-06 16:51   ` Jiri Olsa
2012-12-07  8:53     ` Namhyung Kim
2012-12-06 15:09 ` [PATCH 5/5] perf test: Add a test case for hists__{match,link} Namhyung Kim
