linux-mm.kvack.org archive mirror
* [GIT PULL 0/5] perf/core improvements and fixes
@ 2015-04-13 22:14 Arnaldo Carvalho de Melo
  2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-04-13 22:14 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Arnaldo Carvalho de Melo, David Ahern, He Kuang,
	Jiri Olsa, Joonsoo Kim, linux-mm, Masami Hiramatsu, Minchan Kim,
	Namhyung Kim, Peter Zijlstra, Steven Rostedt, Wang Nan,
	Arnaldo Carvalho de Melo

Hi Ingo,

	Please consider pulling,

Best regards,

- Arnaldo

The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:

  perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo

for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402:

  perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300)

----------------------------------------------------------------
perf/core improvements and fixes:

New features:

- Analyze page allocator events also in 'perf kmem' (Namhyung Kim)

User visible fixes:

- Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)

- lazy_line probe fixes in 'perf probe' (He Kuang)

Infrastructure:

- Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

----------------------------------------------------------------
He Kuang (3):
      perf probe: Set retprobe flag when probe in address-based alternative mode
      perf probe: Make --source available when probe with lazy_line
      perf probe: Fix segfault when probe with lazy_line to file

Namhyung Kim (2):
      tracing, mm: Record pfn instead of pointer to struct page
      perf kmem: Analyze page allocator events also

 include/trace/events/filemap.h         |   8 +-
 include/trace/events/kmem.h            |  42 +--
 include/trace/events/vmscan.h          |   8 +-
 tools/perf/Documentation/perf-kmem.txt |   8 +-
 tools/perf/builtin-kmem.c              | 500 +++++++++++++++++++++++++++++++--
 tools/perf/util/probe-event.c          |   3 +-
 tools/perf/util/probe-event.h          |   2 +
 tools/perf/util/probe-finder.c         |  20 +-
 8 files changed, 540 insertions(+), 51 deletions(-)

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>


* [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo
@ 2015-04-13 22:14 ` Arnaldo Carvalho de Melo
  2017-07-31  7:43   ` Vlastimil Babka
  2015-04-13 22:14 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Arnaldo Carvalho de Melo
  2015-04-13 22:33 ` [GIT PULL 0/5] perf/core improvements and fixes Masami Hiramatsu
  2 siblings, 1 reply; 16+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-04-13 22:14 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm, Arnaldo Carvalho de Melo

From: Namhyung Kim <namhyung@kernel.org>

The struct page is opaque to userspace tools, so it is better to save the
pfn in order to identify page frames.

The textual output of the $debugfs/tracing/trace file remains unchanged and
only the raw (binary) data format is changed - but thanks to libtraceevent,
userspace tools which deal with the raw data (like perf and trace-cmd)
can parse the new format easily, so the impact on userspace will also be
minimal.

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Based-on-patch-by: Joonsoo Kim <js1304@gmail.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/1428298576-9785-3-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
 include/trace/events/filemap.h |  8 ++++----
 include/trace/events/kmem.h    | 42 +++++++++++++++++++++---------------------
 include/trace/events/vmscan.h  |  8 ++++----
 3 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h
index 0421f49a20f7..42febb6bc1d5 100644
--- a/include/trace/events/filemap.h
+++ b/include/trace/events/filemap.h
@@ -18,14 +18,14 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
 	TP_ARGS(page),
 
 	TP_STRUCT__entry(
-		__field(struct page *, page)
+		__field(unsigned long, pfn)
 		__field(unsigned long, i_ino)
 		__field(unsigned long, index)
 		__field(dev_t, s_dev)
 	),
 
 	TP_fast_assign(
-		__entry->page = page;
+		__entry->pfn = page_to_pfn(page);
 		__entry->i_ino = page->mapping->host->i_ino;
 		__entry->index = page->index;
 		if (page->mapping->host->i_sb)
@@ -37,8 +37,8 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
 	TP_printk("dev %d:%d ino %lx page=%p pfn=%lu ofs=%lu",
 		MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
 		__entry->i_ino,
-		__entry->page,
-		page_to_pfn(__entry->page),
+		pfn_to_page(__entry->pfn),
+		__entry->pfn,
 		__entry->index << PAGE_SHIFT)
 );
 
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 4ad10baecd4d..81ea59812117 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -154,18 +154,18 @@ TRACE_EVENT(mm_page_free,
 	TP_ARGS(page, order),
 
 	TP_STRUCT__entry(
-		__field(	struct page *,	page		)
+		__field(	unsigned long,	pfn		)
 		__field(	unsigned int,	order		)
 	),
 
 	TP_fast_assign(
-		__entry->page		= page;
+		__entry->pfn		= page_to_pfn(page);
 		__entry->order		= order;
 	),
 
 	TP_printk("page=%p pfn=%lu order=%d",
-			__entry->page,
-			page_to_pfn(__entry->page),
+			pfn_to_page(__entry->pfn),
+			__entry->pfn,
 			__entry->order)
 );
 
@@ -176,18 +176,18 @@ TRACE_EVENT(mm_page_free_batched,
 	TP_ARGS(page, cold),
 
 	TP_STRUCT__entry(
-		__field(	struct page *,	page		)
+		__field(	unsigned long,	pfn		)
 		__field(	int,		cold		)
 	),
 
 	TP_fast_assign(
-		__entry->page		= page;
+		__entry->pfn		= page_to_pfn(page);
 		__entry->cold		= cold;
 	),
 
 	TP_printk("page=%p pfn=%lu order=0 cold=%d",
-			__entry->page,
-			page_to_pfn(__entry->page),
+			pfn_to_page(__entry->pfn),
+			__entry->pfn,
 			__entry->cold)
 );
 
@@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc,
 	TP_ARGS(page, order, gfp_flags, migratetype),
 
 	TP_STRUCT__entry(
-		__field(	struct page *,	page		)
+		__field(	unsigned long,	pfn		)
 		__field(	unsigned int,	order		)
 		__field(	gfp_t,		gfp_flags	)
 		__field(	int,		migratetype	)
 	),
 
 	TP_fast_assign(
-		__entry->page		= page;
+		__entry->pfn		= page ? page_to_pfn(page) : -1UL;
 		__entry->order		= order;
 		__entry->gfp_flags	= gfp_flags;
 		__entry->migratetype	= migratetype;
 	),
 
 	TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
-		__entry->page,
-		__entry->page ? page_to_pfn(__entry->page) : 0,
+		__entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
+		__entry->pfn != -1UL ? __entry->pfn : 0,
 		__entry->order,
 		__entry->migratetype,
 		show_gfp_flags(__entry->gfp_flags))
@@ -227,20 +227,20 @@ DECLARE_EVENT_CLASS(mm_page,
 	TP_ARGS(page, order, migratetype),
 
 	TP_STRUCT__entry(
-		__field(	struct page *,	page		)
+		__field(	unsigned long,	pfn		)
 		__field(	unsigned int,	order		)
 		__field(	int,		migratetype	)
 	),
 
 	TP_fast_assign(
-		__entry->page		= page;
+		__entry->pfn		= page ? page_to_pfn(page) : -1UL;
 		__entry->order		= order;
 		__entry->migratetype	= migratetype;
 	),
 
 	TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d",
-		__entry->page,
-		__entry->page ? page_to_pfn(__entry->page) : 0,
+		__entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
+		__entry->pfn != -1UL ? __entry->pfn : 0,
 		__entry->order,
 		__entry->migratetype,
 		__entry->order == 0)
@@ -260,7 +260,7 @@ DEFINE_EVENT_PRINT(mm_page, mm_page_pcpu_drain,
 	TP_ARGS(page, order, migratetype),
 
 	TP_printk("page=%p pfn=%lu order=%d migratetype=%d",
-		__entry->page, page_to_pfn(__entry->page),
+		pfn_to_page(__entry->pfn), __entry->pfn,
 		__entry->order, __entry->migratetype)
 );
 
@@ -275,7 +275,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
 		alloc_migratetype, fallback_migratetype),
 
 	TP_STRUCT__entry(
-		__field(	struct page *,	page			)
+		__field(	unsigned long,	pfn			)
 		__field(	int,		alloc_order		)
 		__field(	int,		fallback_order		)
 		__field(	int,		alloc_migratetype	)
@@ -284,7 +284,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
 	),
 
 	TP_fast_assign(
-		__entry->page			= page;
+		__entry->pfn			= page_to_pfn(page);
 		__entry->alloc_order		= alloc_order;
 		__entry->fallback_order		= fallback_order;
 		__entry->alloc_migratetype	= alloc_migratetype;
@@ -294,8 +294,8 @@ TRACE_EVENT(mm_page_alloc_extfrag,
 	),
 
 	TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d",
-		__entry->page,
-		page_to_pfn(__entry->page),
+		pfn_to_page(__entry->pfn),
+		__entry->pfn,
 		__entry->alloc_order,
 		__entry->fallback_order,
 		pageblock_order,
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index 69590b6ffc09..f66476b96264 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -336,18 +336,18 @@ TRACE_EVENT(mm_vmscan_writepage,
 	TP_ARGS(page, reclaim_flags),
 
 	TP_STRUCT__entry(
-		__field(struct page *, page)
+		__field(unsigned long, pfn)
 		__field(int, reclaim_flags)
 	),
 
 	TP_fast_assign(
-		__entry->page = page;
+		__entry->pfn = page_to_pfn(page);
 		__entry->reclaim_flags = reclaim_flags;
 	),
 
 	TP_printk("page=%p pfn=%lu flags=%s",
-		__entry->page,
-		page_to_pfn(__entry->page),
+		pfn_to_page(__entry->pfn),
+		__entry->pfn,
 		show_reclaim_flags(__entry->reclaim_flags))
 );
 
-- 
1.9.3



* [PATCH 2/5] perf kmem: Analyze page allocator events also
  2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo
  2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo
@ 2015-04-13 22:14 ` Arnaldo Carvalho de Melo
  2015-04-13 22:33 ` [GIT PULL 0/5] perf/core improvements and fixes Masami Hiramatsu
  2 siblings, 0 replies; 16+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-04-13 22:14 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-kernel, Namhyung Kim, David Ahern, Jiri Olsa, Joonsoo Kim,
	Minchan Kim, Peter Zijlstra, linux-mm, Arnaldo Carvalho de Melo

From: Namhyung Kim <namhyung@kernel.org>

The perf kmem command records and analyzes kernel memory allocation only
for SLAB objects.  This patch implements a simple page allocator analyzer
using the kmem:mm_page_alloc and kmem:mm_page_free events.

It adds two new options: --slab and --page.  The --slab option selects
SLAB allocator analysis, which is what perf kmem currently does.

The new --page option enables page allocator events and analyzes kernel
memory usage at page granularity.  Currently, only the 'stat --alloc'
subcommand is implemented.

If neither --slab nor --page is specified, --slab is implied.

First run 'perf kmem record' to generate a suitable perf.data file:

  # perf kmem record --page sleep 5

Then run 'perf kmem stat' to postprocess the perf.data file:

  # perf kmem stat --page --alloc --line 10

  -------------------------------------------------------------------------------
   PFN              | Total alloc (KB) | Hits     | Order | Mig.type | GFP flags
  -------------------------------------------------------------------------------
            4045014 |               16 |        1 |     2 |  RECLAIM |  00285250
            4143980 |               16 |        1 |     2 |  RECLAIM |  00285250
            3938658 |               16 |        1 |     2 |  RECLAIM |  00285250
            4045400 |               16 |        1 |     2 |  RECLAIM |  00285250
            3568708 |               16 |        1 |     2 |  RECLAIM |  00285250
            3729824 |               16 |        1 |     2 |  RECLAIM |  00285250
            3657210 |               16 |        1 |     2 |  RECLAIM |  00285250
            4120750 |               16 |        1 |     2 |  RECLAIM |  00285250
            3678850 |               16 |        1 |     2 |  RECLAIM |  00285250
            3693874 |               16 |        1 |     2 |  RECLAIM |  00285250
   ...              | ...              | ...      | ...   | ...      | ...
  -------------------------------------------------------------------------------

  SUMMARY (page allocator)
  ========================
  Total allocation requests     :           44,260   [          177,256 KB ]
  Total free requests           :              117   [              468 KB ]

  Total alloc+freed requests    :               49   [              196 KB ]
  Total alloc-only requests     :           44,211   [          177,060 KB ]
  Total free-only requests      :               68   [              272 KB ]

  Total allocation failures     :                0   [                0 KB ]

  Order     Unmovable   Reclaimable       Movable      Reserved  CMA/Isolated
  -----  ------------  ------------  ------------  ------------  ------------
      0            32             .        44,210             .             .
      1             .             .             .             .             .
      2             .            18             .             .             .
      3             .             .             .             .             .
      4             .             .             .             .             .
      5             .             .             .             .             .
      6             .             .             .             .             .
      7             .             .             .             .             .
      8             .             .             .             .             .
      9             .             .             .             .             .
     10             .             .             .             .             .

Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/1428298576-9785-4-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
 tools/perf/Documentation/perf-kmem.txt |   8 +-
 tools/perf/builtin-kmem.c              | 500 +++++++++++++++++++++++++++++++--
 2 files changed, 491 insertions(+), 17 deletions(-)

diff --git a/tools/perf/Documentation/perf-kmem.txt b/tools/perf/Documentation/perf-kmem.txt
index 150253cc3c97..23219c65c16f 100644
--- a/tools/perf/Documentation/perf-kmem.txt
+++ b/tools/perf/Documentation/perf-kmem.txt
@@ -3,7 +3,7 @@ perf-kmem(1)
 
 NAME
 ----
-perf-kmem - Tool to trace/measure kernel memory(slab) properties
+perf-kmem - Tool to trace/measure kernel memory properties
 
 SYNOPSIS
 --------
@@ -46,6 +46,12 @@ OPTIONS
 --raw-ip::
 	Print raw ip instead of symbol
 
+--slab::
+	Analyze SLAB allocator events.
+
+--page::
+	Analyze page allocator events
+
 SEE ALSO
 --------
 linkperf:perf-record[1]
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index 4ebf65c79434..63ea01349b6e 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -22,6 +22,11 @@
 #include <linux/string.h>
 #include <locale.h>
 
+static int	kmem_slab;
+static int	kmem_page;
+
+static long	kmem_page_size;
+
 struct alloc_stat;
 typedef int (*sort_fn_t)(struct alloc_stat *, struct alloc_stat *);
 
@@ -226,6 +231,244 @@ static int perf_evsel__process_free_event(struct perf_evsel *evsel,
 	return 0;
 }
 
+static u64 total_page_alloc_bytes;
+static u64 total_page_free_bytes;
+static u64 total_page_nomatch_bytes;
+static u64 total_page_fail_bytes;
+static unsigned long nr_page_allocs;
+static unsigned long nr_page_frees;
+static unsigned long nr_page_fails;
+static unsigned long nr_page_nomatch;
+
+static bool use_pfn;
+
+#define MAX_MIGRATE_TYPES  6
+#define MAX_PAGE_ORDER     11
+
+static int order_stats[MAX_PAGE_ORDER][MAX_MIGRATE_TYPES];
+
+struct page_stat {
+	struct rb_node 	node;
+	u64 		page;
+	int 		order;
+	unsigned 	gfp_flags;
+	unsigned 	migrate_type;
+	u64		alloc_bytes;
+	u64 		free_bytes;
+	int 		nr_alloc;
+	int 		nr_free;
+};
+
+static struct rb_root page_tree;
+static struct rb_root page_alloc_tree;
+static struct rb_root page_alloc_sorted;
+
+static struct page_stat *search_page(unsigned long page, bool create)
+{
+	struct rb_node **node = &page_tree.rb_node;
+	struct rb_node *parent = NULL;
+	struct page_stat *data;
+
+	while (*node) {
+		s64 cmp;
+
+		parent = *node;
+		data = rb_entry(*node, struct page_stat, node);
+
+		cmp = data->page - page;
+		if (cmp < 0)
+			node = &parent->rb_left;
+		else if (cmp > 0)
+			node = &parent->rb_right;
+		else
+			return data;
+	}
+
+	if (!create)
+		return NULL;
+
+	data = zalloc(sizeof(*data));
+	if (data != NULL) {
+		data->page = page;
+
+		rb_link_node(&data->node, parent, node);
+		rb_insert_color(&data->node, &page_tree);
+	}
+
+	return data;
+}
+
+static int page_stat_cmp(struct page_stat *a, struct page_stat *b)
+{
+	if (a->page > b->page)
+		return -1;
+	if (a->page < b->page)
+		return 1;
+	if (a->order > b->order)
+		return -1;
+	if (a->order < b->order)
+		return 1;
+	if (a->migrate_type > b->migrate_type)
+		return -1;
+	if (a->migrate_type < b->migrate_type)
+		return 1;
+	if (a->gfp_flags > b->gfp_flags)
+		return -1;
+	if (a->gfp_flags < b->gfp_flags)
+		return 1;
+	return 0;
+}
+
+static struct page_stat *search_page_alloc_stat(struct page_stat *stat, bool create)
+{
+	struct rb_node **node = &page_alloc_tree.rb_node;
+	struct rb_node *parent = NULL;
+	struct page_stat *data;
+
+	while (*node) {
+		s64 cmp;
+
+		parent = *node;
+		data = rb_entry(*node, struct page_stat, node);
+
+		cmp = page_stat_cmp(data, stat);
+		if (cmp < 0)
+			node = &parent->rb_left;
+		else if (cmp > 0)
+			node = &parent->rb_right;
+		else
+			return data;
+	}
+
+	if (!create)
+		return NULL;
+
+	data = zalloc(sizeof(*data));
+	if (data != NULL) {
+		data->page = stat->page;
+		data->order = stat->order;
+		data->gfp_flags = stat->gfp_flags;
+		data->migrate_type = stat->migrate_type;
+
+		rb_link_node(&data->node, parent, node);
+		rb_insert_color(&data->node, &page_alloc_tree);
+	}
+
+	return data;
+}
+
+static bool valid_page(u64 pfn_or_page)
+{
+	if (use_pfn && pfn_or_page == -1UL)
+		return false;
+	if (!use_pfn && pfn_or_page == 0)
+		return false;
+	return true;
+}
+
+static int perf_evsel__process_page_alloc_event(struct perf_evsel *evsel,
+						struct perf_sample *sample)
+{
+	u64 page;
+	unsigned int order = perf_evsel__intval(evsel, sample, "order");
+	unsigned int gfp_flags = perf_evsel__intval(evsel, sample, "gfp_flags");
+	unsigned int migrate_type = perf_evsel__intval(evsel, sample,
+						       "migratetype");
+	u64 bytes = kmem_page_size << order;
+	struct page_stat *stat;
+	struct page_stat this = {
+		.order = order,
+		.gfp_flags = gfp_flags,
+		.migrate_type = migrate_type,
+	};
+
+	if (use_pfn)
+		page = perf_evsel__intval(evsel, sample, "pfn");
+	else
+		page = perf_evsel__intval(evsel, sample, "page");
+
+	nr_page_allocs++;
+	total_page_alloc_bytes += bytes;
+
+	if (!valid_page(page)) {
+		nr_page_fails++;
+		total_page_fail_bytes += bytes;
+
+		return 0;
+	}
+
+	/*
+	 * This is to find the current page (with correct gfp flags and
+	 * migrate type) at free event.
+	 */
+	stat = search_page(page, true);
+	if (stat == NULL)
+		return -ENOMEM;
+
+	stat->order = order;
+	stat->gfp_flags = gfp_flags;
+	stat->migrate_type = migrate_type;
+
+	this.page = page;
+	stat = search_page_alloc_stat(&this, true);
+	if (stat == NULL)
+		return -ENOMEM;
+
+	stat->nr_alloc++;
+	stat->alloc_bytes += bytes;
+
+	order_stats[order][migrate_type]++;
+
+	return 0;
+}
+
+static int perf_evsel__process_page_free_event(struct perf_evsel *evsel,
+						struct perf_sample *sample)
+{
+	u64 page;
+	unsigned int order = perf_evsel__intval(evsel, sample, "order");
+	u64 bytes = kmem_page_size << order;
+	struct page_stat *stat;
+	struct page_stat this = {
+		.order = order,
+	};
+
+	if (use_pfn)
+		page = perf_evsel__intval(evsel, sample, "pfn");
+	else
+		page = perf_evsel__intval(evsel, sample, "page");
+
+	nr_page_frees++;
+	total_page_free_bytes += bytes;
+
+	stat = search_page(page, false);
+	if (stat == NULL) {
+		pr_debug2("missing free at page %"PRIx64" (order: %d)\n",
+			  page, order);
+
+		nr_page_nomatch++;
+		total_page_nomatch_bytes += bytes;
+
+		return 0;
+	}
+
+	this.page = page;
+	this.gfp_flags = stat->gfp_flags;
+	this.migrate_type = stat->migrate_type;
+
+	rb_erase(&stat->node, &page_tree);
+	free(stat);
+
+	stat = search_page_alloc_stat(&this, false);
+	if (stat == NULL)
+		return -ENOENT;
+
+	stat->nr_free++;
+	stat->free_bytes += bytes;
+
+	return 0;
+}
+
 typedef int (*tracepoint_handler)(struct perf_evsel *evsel,
 				  struct perf_sample *sample);
 
@@ -270,8 +513,9 @@ static double fragmentation(unsigned long n_req, unsigned long n_alloc)
 		return 100.0 - (100.0 * n_req / n_alloc);
 }
 
-static void __print_result(struct rb_root *root, struct perf_session *session,
-			   int n_lines, int is_caller)
+static void __print_slab_result(struct rb_root *root,
+				struct perf_session *session,
+				int n_lines, int is_caller)
 {
 	struct rb_node *next;
 	struct machine *machine = &session->machines.host;
@@ -323,9 +567,56 @@ static void __print_result(struct rb_root *root, struct perf_session *session,
 	printf("%.105s\n", graph_dotted_line);
 }
 
-static void print_summary(void)
+static const char * const migrate_type_str[] = {
+	"UNMOVABL",
+	"RECLAIM",
+	"MOVABLE",
+	"RESERVED",
+	"CMA/ISLT",
+	"UNKNOWN",
+};
+
+static void __print_page_result(struct rb_root *root,
+				struct perf_session *session __maybe_unused,
+				int n_lines)
+{
+	struct rb_node *next = rb_first(root);
+	const char *format;
+
+	printf("\n%.80s\n", graph_dotted_line);
+	printf(" %-16s | Total alloc (KB) | Hits      | Order | Mig.type | GFP flags\n",
+	       use_pfn ? "PFN" : "Page");
+	printf("%.80s\n", graph_dotted_line);
+
+	if (use_pfn)
+		format = " %16llu | %'16llu | %'9d | %5d | %8s |  %08lx\n";
+	else
+		format = " %016llx | %'16llu | %'9d | %5d | %8s |  %08lx\n";
+
+	while (next && n_lines--) {
+		struct page_stat *data;
+
+		data = rb_entry(next, struct page_stat, node);
+
+		printf(format, (unsigned long long)data->page,
+		       (unsigned long long)data->alloc_bytes / 1024,
+		       data->nr_alloc, data->order,
+		       migrate_type_str[data->migrate_type],
+		       (unsigned long)data->gfp_flags);
+
+		next = rb_next(next);
+	}
+
+	if (n_lines == -1)
+		printf(" ...              | ...              | ...       | ...   | ...      | ...     \n");
+
+	printf("%.80s\n", graph_dotted_line);
+}
+
+static void print_slab_summary(void)
 {
-	printf("\nSUMMARY\n=======\n");
+	printf("\nSUMMARY (SLAB allocator)");
+	printf("\n========================\n");
 	printf("Total bytes requested: %'lu\n", total_requested);
 	printf("Total bytes allocated: %'lu\n", total_allocated);
 	printf("Total bytes wasted on internal fragmentation: %'lu\n",
@@ -335,13 +626,73 @@ static void print_summary(void)
 	printf("Cross CPU allocations: %'lu/%'lu\n", nr_cross_allocs, nr_allocs);
 }
 
-static void print_result(struct perf_session *session)
+static void print_page_summary(void)
+{
+	int o, m;
+	u64 nr_alloc_freed = nr_page_frees - nr_page_nomatch;
+	u64 total_alloc_freed_bytes = total_page_free_bytes - total_page_nomatch_bytes;
+
+	printf("\nSUMMARY (page allocator)");
+	printf("\n========================\n");
+	printf("%-30s: %'16lu   [ %'16"PRIu64" KB ]\n", "Total allocation requests",
+	       nr_page_allocs, total_page_alloc_bytes / 1024);
+	printf("%-30s: %'16lu   [ %'16"PRIu64" KB ]\n", "Total free requests",
+	       nr_page_frees, total_page_free_bytes / 1024);
+	printf("\n");
+
+	printf("%-30s: %'16lu   [ %'16"PRIu64" KB ]\n", "Total alloc+freed requests",
+	       nr_alloc_freed, (total_alloc_freed_bytes) / 1024);
+	printf("%-30s: %'16lu   [ %'16"PRIu64" KB ]\n", "Total alloc-only requests",
+	       nr_page_allocs - nr_alloc_freed,
+	       (total_page_alloc_bytes - total_alloc_freed_bytes) / 1024);
+	printf("%-30s: %'16lu   [ %'16"PRIu64" KB ]\n", "Total free-only requests",
+	       nr_page_nomatch, total_page_nomatch_bytes / 1024);
+	printf("\n");
+
+	printf("%-30s: %'16lu   [ %'16"PRIu64" KB ]\n", "Total allocation failures",
+	       nr_page_fails, total_page_fail_bytes / 1024);
+	printf("\n");
+
+	printf("%5s  %12s  %12s  %12s  %12s  %12s\n", "Order",  "Unmovable",
+	       "Reclaimable", "Movable", "Reserved", "CMA/Isolated");
+	printf("%.5s  %.12s  %.12s  %.12s  %.12s  %.12s\n", graph_dotted_line,
+	       graph_dotted_line, graph_dotted_line, graph_dotted_line,
+	       graph_dotted_line, graph_dotted_line);
+
+	for (o = 0; o < MAX_PAGE_ORDER; o++) {
+		printf("%5d", o);
+		for (m = 0; m < MAX_MIGRATE_TYPES - 1; m++) {
+			if (order_stats[o][m])
+				printf("  %'12d", order_stats[o][m]);
+			else
+				printf("  %12c", '.');
+		}
+		printf("\n");
+	}
+}
+
+static void print_slab_result(struct perf_session *session)
 {
 	if (caller_flag)
-		__print_result(&root_caller_sorted, session, caller_lines, 1);
+		__print_slab_result(&root_caller_sorted, session, caller_lines, 1);
+	if (alloc_flag)
+		__print_slab_result(&root_alloc_sorted, session, alloc_lines, 0);
+	print_slab_summary();
+}
+
+static void print_page_result(struct perf_session *session)
+{
 	if (alloc_flag)
-		__print_result(&root_alloc_sorted, session, alloc_lines, 0);
-	print_summary();
+		__print_page_result(&page_alloc_sorted, session, alloc_lines);
+	print_page_summary();
+}
+
+static void print_result(struct perf_session *session)
+{
+	if (kmem_slab)
+		print_slab_result(session);
+	if (kmem_page)
+		print_page_result(session);
 }
 
 struct sort_dimension {
@@ -353,8 +704,8 @@ struct sort_dimension {
 static LIST_HEAD(caller_sort);
 static LIST_HEAD(alloc_sort);
 
-static void sort_insert(struct rb_root *root, struct alloc_stat *data,
-			struct list_head *sort_list)
+static void sort_slab_insert(struct rb_root *root, struct alloc_stat *data,
+			     struct list_head *sort_list)
 {
 	struct rb_node **new = &(root->rb_node);
 	struct rb_node *parent = NULL;
@@ -383,8 +734,8 @@ static void sort_insert(struct rb_root *root, struct alloc_stat *data,
 	rb_insert_color(&data->node, root);
 }
 
-static void __sort_result(struct rb_root *root, struct rb_root *root_sorted,
-			  struct list_head *sort_list)
+static void __sort_slab_result(struct rb_root *root, struct rb_root *root_sorted,
+			       struct list_head *sort_list)
 {
 	struct rb_node *node;
 	struct alloc_stat *data;
@@ -396,26 +747,79 @@ static void __sort_result(struct rb_root *root, struct rb_root *root_sorted,
 
 		rb_erase(node, root);
 		data = rb_entry(node, struct alloc_stat, node);
-		sort_insert(root_sorted, data, sort_list);
+		sort_slab_insert(root_sorted, data, sort_list);
+	}
+}
+
+static void sort_page_insert(struct rb_root *root, struct page_stat *data)
+{
+	struct rb_node **new = &root->rb_node;
+	struct rb_node *parent = NULL;
+
+	while (*new) {
+		struct page_stat *this;
+		int cmp = 0;
+
+		this = rb_entry(*new, struct page_stat, node);
+		parent = *new;
+
+		/* TODO: support more sort key */
+		cmp = data->alloc_bytes - this->alloc_bytes;
+
+		if (cmp > 0)
+			new = &parent->rb_left;
+		else
+			new = &parent->rb_right;
+	}
+
+	rb_link_node(&data->node, parent, new);
+	rb_insert_color(&data->node, root);
+}
+
+static void __sort_page_result(struct rb_root *root, struct rb_root *root_sorted)
+{
+	struct rb_node *node;
+	struct page_stat *data;
+
+	for (;;) {
+		node = rb_first(root);
+		if (!node)
+			break;
+
+		rb_erase(node, root);
+		data = rb_entry(node, struct page_stat, node);
+		sort_page_insert(root_sorted, data);
 	}
 }
 
 static void sort_result(void)
 {
-	__sort_result(&root_alloc_stat, &root_alloc_sorted, &alloc_sort);
-	__sort_result(&root_caller_stat, &root_caller_sorted, &caller_sort);
+	if (kmem_slab) {
+		__sort_slab_result(&root_alloc_stat, &root_alloc_sorted,
+				   &alloc_sort);
+		__sort_slab_result(&root_caller_stat, &root_caller_sorted,
+				   &caller_sort);
+	}
+	if (kmem_page) {
+		__sort_page_result(&page_alloc_tree, &page_alloc_sorted);
+	}
 }
 
 static int __cmd_kmem(struct perf_session *session)
 {
 	int err = -EINVAL;
+	struct perf_evsel *evsel;
 	const struct perf_evsel_str_handler kmem_tracepoints[] = {
+		/* slab allocator */
 		{ "kmem:kmalloc",		perf_evsel__process_alloc_event, },
     		{ "kmem:kmem_cache_alloc",	perf_evsel__process_alloc_event, },
 		{ "kmem:kmalloc_node",		perf_evsel__process_alloc_node_event, },
     		{ "kmem:kmem_cache_alloc_node", perf_evsel__process_alloc_node_event, },
 		{ "kmem:kfree",			perf_evsel__process_free_event, },
     		{ "kmem:kmem_cache_free",	perf_evsel__process_free_event, },
+		/* page allocator */
+		{ "kmem:mm_page_alloc",		perf_evsel__process_page_alloc_event, },
+		{ "kmem:mm_page_free",		perf_evsel__process_page_free_event, },
 	};
 
 	if (!perf_session__has_traces(session, "kmem record"))
@@ -426,10 +830,20 @@ static int __cmd_kmem(struct perf_session *session)
 		goto out;
 	}
 
+	evlist__for_each(session->evlist, evsel) {
+		if (!strcmp(perf_evsel__name(evsel), "kmem:mm_page_alloc") &&
+		    perf_evsel__field(evsel, "pfn")) {
+			use_pfn = true;
+			break;
+		}
+	}
+
 	setup_pager();
 	err = perf_session__process_events(session);
-	if (err != 0)
+	if (err != 0) {
+		pr_err("error during process events: %d\n", err);
 		goto out;
+	}
 	sort_result();
 	print_result(session);
 out:
@@ -612,6 +1026,22 @@ static int parse_alloc_opt(const struct option *opt __maybe_unused,
 	return 0;
 }
 
+static int parse_slab_opt(const struct option *opt __maybe_unused,
+			  const char *arg __maybe_unused,
+			  int unset __maybe_unused)
+{
+	kmem_slab = (kmem_page + 1);
+	return 0;
+}
+
+static int parse_page_opt(const struct option *opt __maybe_unused,
+			  const char *arg __maybe_unused,
+			  int unset __maybe_unused)
+{
+	kmem_page = (kmem_slab + 1);
+	return 0;
+}
+
 static int parse_line_opt(const struct option *opt __maybe_unused,
 			  const char *arg, int unset __maybe_unused)
 {
@@ -634,6 +1064,8 @@ static int __cmd_record(int argc, const char **argv)
 {
 	const char * const record_args[] = {
 	"record", "-a", "-R", "-c", "1",
+	};
+	const char * const slab_events[] = {
 	"-e", "kmem:kmalloc",
 	"-e", "kmem:kmalloc_node",
 	"-e", "kmem:kfree",
@@ -641,10 +1073,19 @@ static int __cmd_record(int argc, const char **argv)
 	"-e", "kmem:kmem_cache_alloc_node",
 	"-e", "kmem:kmem_cache_free",
 	};
+	const char * const page_events[] = {
+	"-e", "kmem:mm_page_alloc",
+	"-e", "kmem:mm_page_free",
+	};
 	unsigned int rec_argc, i, j;
 	const char **rec_argv;
 
 	rec_argc = ARRAY_SIZE(record_args) + argc - 1;
+	if (kmem_slab)
+		rec_argc += ARRAY_SIZE(slab_events);
+	if (kmem_page)
+		rec_argc += ARRAY_SIZE(page_events);
+
 	rec_argv = calloc(rec_argc + 1, sizeof(char *));
 
 	if (rec_argv == NULL)
@@ -653,6 +1094,15 @@ static int __cmd_record(int argc, const char **argv)
 	for (i = 0; i < ARRAY_SIZE(record_args); i++)
 		rec_argv[i] = strdup(record_args[i]);
 
+	if (kmem_slab) {
+		for (j = 0; j < ARRAY_SIZE(slab_events); j++, i++)
+			rec_argv[i] = strdup(slab_events[j]);
+	}
+	if (kmem_page) {
+		for (j = 0; j < ARRAY_SIZE(page_events); j++, i++)
+			rec_argv[i] = strdup(page_events[j]);
+	}
+
 	for (j = 1; j < (unsigned int)argc; j++, i++)
 		rec_argv[i] = argv[j];
 
@@ -679,6 +1129,10 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused)
 	OPT_CALLBACK('l', "line", NULL, "num", "show n lines", parse_line_opt),
 	OPT_BOOLEAN(0, "raw-ip", &raw_ip, "show raw ip instead of symbol"),
 	OPT_BOOLEAN('f', "force", &file.force, "don't complain, do it"),
+	OPT_CALLBACK_NOOPT(0, "slab", NULL, NULL, "Analyze slab allocator",
+			   parse_slab_opt),
+	OPT_CALLBACK_NOOPT(0, "page", NULL, NULL, "Analyze page allocator",
+			   parse_page_opt),
 	OPT_END()
 	};
 	const char *const kmem_subcommands[] = { "record", "stat", NULL };
@@ -695,6 +1149,9 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused)
 	if (!argc)
 		usage_with_options(kmem_usage, kmem_options);
 
+	if (kmem_slab == 0 && kmem_page == 0)
+		kmem_slab = 1;  /* for backward compatibility */
+
 	if (!strncmp(argv[0], "rec", 3)) {
 		symbol__init(NULL);
 		return __cmd_record(argc, argv);
@@ -706,6 +1163,17 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused)
 	if (session == NULL)
 		return -1;
 
+	if (kmem_page) {
+		struct perf_evsel *evsel = perf_evlist__first(session->evlist);
+
+		if (evsel == NULL || evsel->tp_format == NULL) {
+			pr_err("invalid event found.. aborting\n");
+			return -1;
+		}
+
+		kmem_page_size = pevent_get_page_size(evsel->tp_format->pevent);
+	}
+
 	symbol__init(&session->header.env);
 
 	if (!strcmp(argv[0], "stat")) {
-- 
1.9.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>


* Re: [GIT PULL 0/5] perf/core improvements and fixes
  2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo
  2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo
  2015-04-13 22:14 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Arnaldo Carvalho de Melo
@ 2015-04-13 22:33 ` Masami Hiramatsu
  2015-04-13 23:09   ` Arnaldo Carvalho de Melo
  2 siblings, 1 reply; 16+ messages in thread
From: Masami Hiramatsu @ 2015-04-13 22:33 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, linux-kernel, David Ahern, He Kuang, Jiri Olsa,
	Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra,
	Steven Rostedt, Wang Nan, Arnaldo Carvalho de Melo

Hi, Arnaldo,

>       perf probe: Make --source avaiable when probe with lazy_line

No, could you pull Naohiro's patch?
I'd like to move get_real_path to probe_finder.c

Thank you,

(2015/04/14 7:14), Arnaldo Carvalho de Melo wrote:
> Hi Ingo,
> 
> 	Please consider pulling,
> 
> Best regards,
> 
> - Arnaldo
> 
> The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
> 
>   perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
> 
> are available in the git repository at:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo
> 
> for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402:
> 
>   perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300)
> 
> ----------------------------------------------------------------
> perf/core improvements and fixes:
> 
> New features:
> 
> - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
> 
> User visible fixes:
> 
> - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
> 
> - lazy_line probe fixes in 'perf probe' (He Kuang)
> 
> Infrastructure:
> 
> - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
> 
> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
> 
> ----------------------------------------------------------------
> He Kuang (3):
>       perf probe: Set retprobe flag when probe in address-based alternative mode
>       perf probe: Make --source avaiable when probe with lazy_line
>       perf probe: Fix segfault when probe with lazy_line to file
> 
> Namhyung Kim (2):
>       tracing, mm: Record pfn instead of pointer to struct page
>       perf kmem: Analyze page allocator events also
> 
>  include/trace/events/filemap.h         |   8 +-
>  include/trace/events/kmem.h            |  42 +--
>  include/trace/events/vmscan.h          |   8 +-
>  tools/perf/Documentation/perf-kmem.txt |   8 +-
>  tools/perf/builtin-kmem.c              | 500 +++++++++++++++++++++++++++++++--
>  tools/perf/util/probe-event.c          |   3 +-
>  tools/perf/util/probe-event.h          |   2 +
>  tools/perf/util/probe-finder.c         |  20 +-
>  8 files changed, 540 insertions(+), 51 deletions(-)
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 
> 


-- 
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: masami.hiramatsu.pt@hitachi.com




* Re: [GIT PULL 0/5] perf/core improvements and fixes
  2015-04-13 22:33 ` [GIT PULL 0/5] perf/core improvements and fixes Masami Hiramatsu
@ 2015-04-13 23:09   ` Arnaldo Carvalho de Melo
  2015-04-13 23:19     ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 16+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-04-13 23:09 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, linux-kernel, David Ahern, He Kuang, Jiri Olsa,
	Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra,
	Steven Rostedt, Wang Nan

Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu:
> Hi, Arnaldo,
> 
> >       perf probe: Make --source avaiable when probe with lazy_line
> 
> No, could you pull Naohiro's patch?
> I'd like to move get_real_path to probe_finder.c

OOps, yeah, you asked for that... Ingo, please ignore this pull request
for now, thanks,

- Arnaldo
 
> Thank you,
> 
> (2015/04/14 7:14), Arnaldo Carvalho de Melo wrote:
> > Hi Ingo,
> > 
> > 	Please consider pulling,
> > 
> > Best regards,
> > 
> > - Arnaldo
> > 
> > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
> > 
> >   perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
> > 
> > are available in the git repository at:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo
> > 
> > for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402:
> > 
> >   perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300)
> > 
> > ----------------------------------------------------------------
> > perf/core improvements and fixes:
> > 
> > New features:
> > 
> > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
> > 
> > User visible fixes:
> > 
> > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
> > 
> > - lazy_line probe fixes in 'perf probe' (He Kuang)
> > 
> > Infrastructure:
> > 
> > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
> > 
> > Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
> > 
> > ----------------------------------------------------------------
> > He Kuang (3):
> >       perf probe: Set retprobe flag when probe in address-based alternative mode
> >       perf probe: Make --source avaiable when probe with lazy_line
> >       perf probe: Fix segfault when probe with lazy_line to file
> > 
> > Namhyung Kim (2):
> >       tracing, mm: Record pfn instead of pointer to struct page
> >       perf kmem: Analyze page allocator events also
> > 
> >  include/trace/events/filemap.h         |   8 +-
> >  include/trace/events/kmem.h            |  42 +--
> >  include/trace/events/vmscan.h          |   8 +-
> >  tools/perf/Documentation/perf-kmem.txt |   8 +-
> >  tools/perf/builtin-kmem.c              | 500 +++++++++++++++++++++++++++++++--
> >  tools/perf/util/probe-event.c          |   3 +-
> >  tools/perf/util/probe-event.h          |   2 +
> >  tools/perf/util/probe-finder.c         |  20 +-
> >  8 files changed, 540 insertions(+), 51 deletions(-)
> > 
> > 
> 
> 
> -- 
> Masami HIRAMATSU
> Linux Technology Research Center, System Productivity Research Dept.
> Center for Technology Innovation - Systems Engineering
> Hitachi, Ltd., Research & Development Group
> E-mail: masami.hiramatsu.pt@hitachi.com
> 



* Re: [GIT PULL 0/5] perf/core improvements and fixes
  2015-04-13 23:09   ` Arnaldo Carvalho de Melo
@ 2015-04-13 23:19     ` Arnaldo Carvalho de Melo
  2015-04-14  7:04       ` Masami Hiramatsu
  2015-04-14 12:12       ` Ingo Molnar
  0 siblings, 2 replies; 16+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-04-13 23:19 UTC (permalink / raw)
  To: Masami Hiramatsu, Ingo Molnar
  Cc: linux-kernel, David Ahern, He Kuang, Jiri Olsa, Joonsoo Kim,
	linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra,
	Steven Rostedt, Wang Nan

Em Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu:
> > Hi, Arnaldo,
> > 
> > >       perf probe: Make --source avaiable when probe with lazy_line
> > 
> > No, could you pull Naohiro's patch?
> > I'd like to move get_real_path to probe_finder.c
> 
> OOps, yeah, you asked for that... Ingo, please ignore this pull request
> for now, thanks,

Ok, I did that and created a perf-core-for-mingo-2, Masami, please check
that all is right, ok?

- Arnaldo

The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:

  perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2

for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690:

  perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300)

----------------------------------------------------------------
perf/core improvements and fixes:

New features:

- Analyze page allocator events also in 'perf kmem' (Namhyung Kim)

User visible fixes:

- Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)

- lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang)

Infrastructure:

- Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

----------------------------------------------------------------
He Kuang (2):
      perf probe: Set retprobe flag when probe in address-based alternative mode
      perf probe: Fix segfault when probe with lazy_line to file

Namhyung Kim (2):
      tracing, mm: Record pfn instead of pointer to struct page
      perf kmem: Analyze page allocator events also

Naohiro Aota (1):
      perf probe: Find compilation directory path for lazy matching

 include/trace/events/filemap.h         |   8 +-
 include/trace/events/kmem.h            |  42 +--
 include/trace/events/vmscan.h          |   8 +-
 tools/perf/Documentation/perf-kmem.txt |   8 +-
 tools/perf/builtin-kmem.c              | 500 +++++++++++++++++++++++++++++++--
 tools/perf/util/probe-event.c          |  60 +---
 tools/perf/util/probe-finder.c         |  73 ++++-
 tools/perf/util/probe-finder.h         |   4 +
 8 files changed, 596 insertions(+), 107 deletions(-)



* Re: Re: [GIT PULL 0/5] perf/core improvements and fixes
  2015-04-13 23:19     ` Arnaldo Carvalho de Melo
@ 2015-04-14  7:04       ` Masami Hiramatsu
  2015-04-14 12:17         ` Arnaldo Carvalho de Melo
  2015-04-14 12:12       ` Ingo Molnar
  1 sibling, 1 reply; 16+ messages in thread
From: Masami Hiramatsu @ 2015-04-14  7:04 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, linux-kernel, David Ahern, He Kuang, Jiri Olsa,
	Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra,
	Steven Rostedt, Wang Nan

(2015/04/14 8:19), Arnaldo Carvalho de Melo wrote:
> Em Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo escreveu:
>> Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu:
>>> Hi, Arnaldo,
>>>
>>>>       perf probe: Make --source avaiable when probe with lazy_line
>>>
>>> No, could you pull Naohiro's patch?
>>> I'd like to move get_real_path to probe_finder.c
>>
>> OOps, yeah, you asked for that... Ingo, please ignore this pull request
>> for now, thanks,
> 
> Ok, I did that and created a perf-core-for-mingo-2, Masami, please check
> that all is right, ok?

OK, I've built and tested it :)

Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>

Thank you!

> 
> - Arnaldo
> 
> The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
> 
>   perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
> 
> are available in the git repository at:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2
> 
> for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690:
> 
>   perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300)
> 
> ----------------------------------------------------------------
> perf/core improvements and fixes:
> 
> New features:
> 
> - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
> 
> User visible fixes:
> 
> - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
> 
> - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang)
> 
> Infrastructure:
> 
> - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
> 
> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
> 
> ----------------------------------------------------------------
> He Kuang (2):
>       perf probe: Set retprobe flag when probe in address-based alternative mode
>       perf probe: Fix segfault when probe with lazy_line to file
> 
> Namhyung Kim (2):
>       tracing, mm: Record pfn instead of pointer to struct page
>       perf kmem: Analyze page allocator events also
> 
> Naohiro Aota (1):
>       perf probe: Find compilation directory path for lazy matching
> 
>  include/trace/events/filemap.h         |   8 +-
>  include/trace/events/kmem.h            |  42 +--
>  include/trace/events/vmscan.h          |   8 +-
>  tools/perf/Documentation/perf-kmem.txt |   8 +-
>  tools/perf/builtin-kmem.c              | 500 +++++++++++++++++++++++++++++++--
>  tools/perf/util/probe-event.c          |  60 +---
>  tools/perf/util/probe-finder.c         |  73 ++++-
>  tools/perf/util/probe-finder.h         |   4 +
>  8 files changed, 596 insertions(+), 107 deletions(-)
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 


-- 
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: masami.hiramatsu.pt@hitachi.com




* Re: [GIT PULL 0/5] perf/core improvements and fixes
  2015-04-13 23:19     ` Arnaldo Carvalho de Melo
  2015-04-14  7:04       ` Masami Hiramatsu
@ 2015-04-14 12:12       ` Ingo Molnar
  1 sibling, 0 replies; 16+ messages in thread
From: Ingo Molnar @ 2015-04-14 12:12 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Masami Hiramatsu, linux-kernel, David Ahern, He Kuang, Jiri Olsa,
	Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra,
	Steven Rostedt, Wang Nan


* Arnaldo Carvalho de Melo <acme@kernel.org> wrote:

> Em Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo escreveu:
> > Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu:
> > > Hi, Arnaldo,
> > > 
> > > >       perf probe: Make --source avaiable when probe with lazy_line
> > > 
> > > No, could you pull Naohiro's patch?
> > > I'd like to move get_real_path to probe_finder.c
> > 
> > OOps, yeah, you asked for that... Ingo, please ignore this pull request
> > for now, thanks,
> 
> Ok, I did that and created a perf-core-for-mingo-2, Masami, please check
> that all is right, ok?
> 
> - Arnaldo
> 
> The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
> 
>   perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
> 
> are available in the git repository at:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2
> 
> for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690:
> 
>   perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300)
> 
> ----------------------------------------------------------------
> perf/core improvements and fixes:
> 
> New features:
> 
> - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
> 
> User visible fixes:
> 
> - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
> 
> - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang)
> 
> Infrastructure:
> 
> - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
> 
> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
> 
> ----------------------------------------------------------------
> He Kuang (2):
>       perf probe: Set retprobe flag when probe in address-based alternative mode
>       perf probe: Fix segfault when probe with lazy_line to file
> 
> Namhyung Kim (2):
>       tracing, mm: Record pfn instead of pointer to struct page
>       perf kmem: Analyze page allocator events also
> 
> Naohiro Aota (1):
>       perf probe: Find compilation directory path for lazy matching
> 
>  include/trace/events/filemap.h         |   8 +-
>  include/trace/events/kmem.h            |  42 +--
>  include/trace/events/vmscan.h          |   8 +-
>  tools/perf/Documentation/perf-kmem.txt |   8 +-
>  tools/perf/builtin-kmem.c              | 500 +++++++++++++++++++++++++++++++--
>  tools/perf/util/probe-event.c          |  60 +---
>  tools/perf/util/probe-finder.c         |  73 ++++-
>  tools/perf/util/probe-finder.h         |   4 +
>  8 files changed, 596 insertions(+), 107 deletions(-)

Pulled, thanks a lot Arnaldo!

	Ingo



* Re: Re: [GIT PULL 0/5] perf/core improvements and fixes
  2015-04-14  7:04       ` Masami Hiramatsu
@ 2015-04-14 12:17         ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 16+ messages in thread
From: Arnaldo Carvalho de Melo @ 2015-04-14 12:17 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ingo Molnar, linux-kernel, David Ahern, He Kuang, Jiri Olsa,
	Joonsoo Kim, linux-mm, Minchan Kim, Namhyung Kim, Peter Zijlstra,
	Steven Rostedt, Wang Nan

Em Tue, Apr 14, 2015 at 04:04:29PM +0900, Masami Hiramatsu escreveu:
> (2015/04/14 8:19), Arnaldo Carvalho de Melo wrote:
> > Em Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo escreveu:
> >> Em Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu escreveu:
> >>> Hi, Arnaldo,
> >>>
> >>>>       perf probe: Make --source avaiable when probe with lazy_line
> >>>
> >>> No, could you pull Naohiro's patch?
> >>> I'd like to move get_real_path to probe_finder.c
> >>
> >> OOps, yeah, you asked for that... Ingo, please ignore this pull request
> >> for now, thanks,
> > 
> > Ok, I did that and created a perf-core-for-mingo-2, Masami, please check
> > that all is right, ok?
> 
> OK, I've built and tested it :)
> 
> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
> Tested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>

Thanks, and sorry for the slip-up in getting the right patch as we
agreed in that discussion,

Regards,

- Arnaldo
 
> Thank you!
> 
> > 
> > - Arnaldo
> > 
> > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
> > 
> >   perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
> > 
> > are available in the git repository at:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2
> > 
> > for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690:
> > 
> >   perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300)
> > 
> > ----------------------------------------------------------------
> > perf/core improvements and fixes:
> > 
> > New features:
> > 
> > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
> > 
> > User visible fixes:
> > 
> > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
> > 
> > - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang)
> > 
> > Infrastructure:
> > 
> > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
> > 
> > Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
> > 
> > ----------------------------------------------------------------
> > He Kuang (2):
> >       perf probe: Set retprobe flag when probe in address-based alternative mode
> >       perf probe: Fix segfault when probe with lazy_line to file
> > 
> > Namhyung Kim (2):
> >       tracing, mm: Record pfn instead of pointer to struct page
> >       perf kmem: Analyze page allocator events also
> > 
> > Naohiro Aota (1):
> >       perf probe: Find compilation directory path for lazy matching
> > 
> >  include/trace/events/filemap.h         |   8 +-
> >  include/trace/events/kmem.h            |  42 +--
> >  include/trace/events/vmscan.h          |   8 +-
> >  tools/perf/Documentation/perf-kmem.txt |   8 +-
> >  tools/perf/builtin-kmem.c              | 500 +++++++++++++++++++++++++++++++--
> >  tools/perf/util/probe-event.c          |  60 +---
> >  tools/perf/util/probe-finder.c         |  73 ++++-
> >  tools/perf/util/probe-finder.h         |   4 +
> >  8 files changed, 596 insertions(+), 107 deletions(-)
> > 
> 
> 
> -- 
> Masami HIRAMATSU
> Linux Technology Research Center, System Productivity Research Dept.
> Center for Technology Innovation - Systems Engineering
> Hitachi, Ltd., Research & Development Group
> E-mail: masami.hiramatsu.pt@hitachi.com
> 



* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo
@ 2017-07-31  7:43   ` Vlastimil Babka
  2017-08-31 11:38     ` Vlastimil Babka
  2017-08-31 13:43     ` Steven Rostedt
  0 siblings, 2 replies; 16+ messages in thread
From: Vlastimil Babka @ 2017-07-31  7:43 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
> From: Namhyung Kim <namhyung@kernel.org>
> 
> The struct page is opaque for userspace tools, so it'd be better to save
> pfn in order to identify page frames.
> 
> The textual output of $debugfs/tracing/trace file remains unchanged and
> only raw (binary) data format is changed - but thanks to libtraceevent,
> userspace tools which deal with the raw data (like perf and trace-cmd)
> can parse the format easily.

Hmm, it seems trace-cmd doesn't work that well, at least on a current
x86_64 kernel where I noticed it:

 trace-cmd-22020 [003] 105219.542610: mm_page_alloc:        [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1

I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
The events/kmem/mm_page_alloc/format file contains this for page:

REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)

I think userspace can't know vmemmap_base, nor the implied sizeof(struct
page) for the pointer arithmetic?

On older 4.4-based kernel:

REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)

This also fails to parse, so it must be the struct page part?

I think the problem is, even if we solve this with some more
preprocessor trickery to make the format file contain only constant
numbers, pfn_to_page() on e.g. the sparse memory model without vmemmap is
more complicated than simple arithmetic, and can't be exported in the
format file.

I'm afraid that to support userspace parsing of the trace data, we will
have to store both struct page and pfn... or perhaps give up on reporting
the struct page pointer completely. Thoughts?

> So impact on the userspace will also be
> minimal.
> 
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> Based-on-patch-by: Joonsoo Kim <js1304@gmail.com>
> Acked-by: Ingo Molnar <mingo@kernel.org>
> Acked-by: Steven Rostedt <rostedt@goodmis.org>
> Cc: David Ahern <dsahern@gmail.com>
> Cc: Jiri Olsa <jolsa@redhat.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
> Cc: linux-mm@kvack.org
> Link: http://lkml.kernel.org/r/1428298576-9785-3-git-send-email-namhyung@kernel.org
> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
> ---
>  include/trace/events/filemap.h |  8 ++++----
>  include/trace/events/kmem.h    | 42 +++++++++++++++++++++---------------------
>  include/trace/events/vmscan.h  |  8 ++++----
>  3 files changed, 29 insertions(+), 29 deletions(-)
> 
> diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h
> index 0421f49a20f7..42febb6bc1d5 100644
> --- a/include/trace/events/filemap.h
> +++ b/include/trace/events/filemap.h
> @@ -18,14 +18,14 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
>  	TP_ARGS(page),
>  
>  	TP_STRUCT__entry(
> -		__field(struct page *, page)
> +		__field(unsigned long, pfn)
>  		__field(unsigned long, i_ino)
>  		__field(unsigned long, index)
>  		__field(dev_t, s_dev)
>  	),
>  
>  	TP_fast_assign(
> -		__entry->page = page;
> +		__entry->pfn = page_to_pfn(page);
>  		__entry->i_ino = page->mapping->host->i_ino;
>  		__entry->index = page->index;
>  		if (page->mapping->host->i_sb)
> @@ -37,8 +37,8 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
>  	TP_printk("dev %d:%d ino %lx page=%p pfn=%lu ofs=%lu",
>  		MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
>  		__entry->i_ino,
> -		__entry->page,
> -		page_to_pfn(__entry->page),
> +		pfn_to_page(__entry->pfn),
> +		__entry->pfn,
>  		__entry->index << PAGE_SHIFT)
>  );
>  
> diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
> index 4ad10baecd4d..81ea59812117 100644
> --- a/include/trace/events/kmem.h
> +++ b/include/trace/events/kmem.h
> @@ -154,18 +154,18 @@ TRACE_EVENT(mm_page_free,
>  	TP_ARGS(page, order),
>  
>  	TP_STRUCT__entry(
> -		__field(	struct page *,	page		)
> +		__field(	unsigned long,	pfn		)
>  		__field(	unsigned int,	order		)
>  	),
>  
>  	TP_fast_assign(
> -		__entry->page		= page;
> +		__entry->pfn		= page_to_pfn(page);
>  		__entry->order		= order;
>  	),
>  
>  	TP_printk("page=%p pfn=%lu order=%d",
> -			__entry->page,
> -			page_to_pfn(__entry->page),
> +			pfn_to_page(__entry->pfn),
> +			__entry->pfn,
>  			__entry->order)
>  );
>  
> @@ -176,18 +176,18 @@ TRACE_EVENT(mm_page_free_batched,
>  	TP_ARGS(page, cold),
>  
>  	TP_STRUCT__entry(
> -		__field(	struct page *,	page		)
> +		__field(	unsigned long,	pfn		)
>  		__field(	int,		cold		)
>  	),
>  
>  	TP_fast_assign(
> -		__entry->page		= page;
> +		__entry->pfn		= page_to_pfn(page);
>  		__entry->cold		= cold;
>  	),
>  
>  	TP_printk("page=%p pfn=%lu order=0 cold=%d",
> -			__entry->page,
> -			page_to_pfn(__entry->page),
> +			pfn_to_page(__entry->pfn),
> +			__entry->pfn,
>  			__entry->cold)
>  );
>  
> @@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc,
>  	TP_ARGS(page, order, gfp_flags, migratetype),
>  
>  	TP_STRUCT__entry(
> -		__field(	struct page *,	page		)
> +		__field(	unsigned long,	pfn		)
>  		__field(	unsigned int,	order		)
>  		__field(	gfp_t,		gfp_flags	)
>  		__field(	int,		migratetype	)
>  	),
>  
>  	TP_fast_assign(
> -		__entry->page		= page;
> +		__entry->pfn		= page ? page_to_pfn(page) : -1UL;
>  		__entry->order		= order;
>  		__entry->gfp_flags	= gfp_flags;
>  		__entry->migratetype	= migratetype;
>  	),
>  
>  	TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
> -		__entry->page,
> -		__entry->page ? page_to_pfn(__entry->page) : 0,
> +		__entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
> +		__entry->pfn != -1UL ? __entry->pfn : 0,
>  		__entry->order,
>  		__entry->migratetype,
>  		show_gfp_flags(__entry->gfp_flags))
> @@ -227,20 +227,20 @@ DECLARE_EVENT_CLASS(mm_page,
>  	TP_ARGS(page, order, migratetype),
>  
>  	TP_STRUCT__entry(
> -		__field(	struct page *,	page		)
> +		__field(	unsigned long,	pfn		)
>  		__field(	unsigned int,	order		)
>  		__field(	int,		migratetype	)
>  	),
>  
>  	TP_fast_assign(
> -		__entry->page		= page;
> +		__entry->pfn		= page ? page_to_pfn(page) : -1UL;
>  		__entry->order		= order;
>  		__entry->migratetype	= migratetype;
>  	),
>  
>  	TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d",
> -		__entry->page,
> -		__entry->page ? page_to_pfn(__entry->page) : 0,
> +		__entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
> +		__entry->pfn != -1UL ? __entry->pfn : 0,
>  		__entry->order,
>  		__entry->migratetype,
>  		__entry->order == 0)
> @@ -260,7 +260,7 @@ DEFINE_EVENT_PRINT(mm_page, mm_page_pcpu_drain,
>  	TP_ARGS(page, order, migratetype),
>  
>  	TP_printk("page=%p pfn=%lu order=%d migratetype=%d",
> -		__entry->page, page_to_pfn(__entry->page),
> +		pfn_to_page(__entry->pfn), __entry->pfn,
>  		__entry->order, __entry->migratetype)
>  );
>  
> @@ -275,7 +275,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>  		alloc_migratetype, fallback_migratetype),
>  
>  	TP_STRUCT__entry(
> -		__field(	struct page *,	page			)
> +		__field(	unsigned long,	pfn			)
>  		__field(	int,		alloc_order		)
>  		__field(	int,		fallback_order		)
>  		__field(	int,		alloc_migratetype	)
> @@ -284,7 +284,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>  	),
>  
>  	TP_fast_assign(
> -		__entry->page			= page;
> +		__entry->pfn			= page_to_pfn(page);
>  		__entry->alloc_order		= alloc_order;
>  		__entry->fallback_order		= fallback_order;
>  		__entry->alloc_migratetype	= alloc_migratetype;
> @@ -294,8 +294,8 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>  	),
>  
>  	TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d",
> -		__entry->page,
> -		page_to_pfn(__entry->page),
> +		pfn_to_page(__entry->pfn),
> +		__entry->pfn,
>  		__entry->alloc_order,
>  		__entry->fallback_order,
>  		pageblock_order,
> diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
> index 69590b6ffc09..f66476b96264 100644
> --- a/include/trace/events/vmscan.h
> +++ b/include/trace/events/vmscan.h
> @@ -336,18 +336,18 @@ TRACE_EVENT(mm_vmscan_writepage,
>  	TP_ARGS(page, reclaim_flags),
>  
>  	TP_STRUCT__entry(
> -		__field(struct page *, page)
> +		__field(unsigned long, pfn)
>  		__field(int, reclaim_flags)
>  	),
>  
>  	TP_fast_assign(
> -		__entry->page = page;
> +		__entry->pfn = page_to_pfn(page);
>  		__entry->reclaim_flags = reclaim_flags;
>  	),
>  
>  	TP_printk("page=%p pfn=%lu flags=%s",
> -		__entry->page,
> -		page_to_pfn(__entry->page),
> +		pfn_to_page(__entry->pfn),
> +		__entry->pfn,
>  		show_reclaim_flags(__entry->reclaim_flags))
>  );
>  
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-07-31  7:43   ` Vlastimil Babka
@ 2017-08-31 11:38     ` Vlastimil Babka
  2017-08-31 13:43     ` Steven Rostedt
  1 sibling, 0 replies; 16+ messages in thread
From: Vlastimil Babka @ 2017-08-31 11:38 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo, Ingo Molnar, Steven Rostedt
  Cc: linux-kernel, Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

Ping?

On 07/31/2017 09:43 AM, Vlastimil Babka wrote:
> On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
>> From: Namhyung Kim <namhyung@kernel.org>
>>
>> The struct page is opaque for userspace tools, so it'd be better to save
>> pfn in order to identify page frames.
>>
>> The textual output of $debugfs/tracing/trace file remains unchanged and
>> only raw (binary) data format is changed - but thanks to libtraceevent,
>> userspace tools which deal with the raw data (like perf and trace-cmd)
>> can parse the format easily.
> 
> Hmm it seems trace-cmd doesn't work that well, at least on current
> x86_64 kernel where I noticed it:
> 
>  trace-cmd-22020 [003] 105219.542610: mm_page_alloc:        [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1
> 
> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
> The events/kmem/mm_page_alloc/format file contains this for page:
> 
> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
> 
> I think userspace can't know vmemmap_base nor the implied sizeof(struct
> page) for pointer arithmetic?
> 
> On older 4.4-based kernel:
> 
> REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)
> 
> This also fails to parse, so it must be the struct page part?
> 
> I think the problem is, even if we solve this with some more
> preprocessor trickery to make the format file contain only constant
> numbers, pfn_to_page() on e.g. sparse memory model without vmemmap is
> more complicated than simple arithmetic, and can't be exported in the
> format file.
> 
> I'm afraid that to support userspace parsing of the trace data, we will
> have to store both struct page and pfn... or perhaps give up on reporting
> the struct page pointer completely. Thoughts?
> 
>> So impact on the userspace will also be
>> minimal.
>>
>> Signed-off-by: Namhyung Kim <namhyung@kernel.org>
>> Based-on-patch-by: Joonsoo Kim <js1304@gmail.com>
>> Acked-by: Ingo Molnar <mingo@kernel.org>
>> Acked-by: Steven Rostedt <rostedt@goodmis.org>
>> Cc: David Ahern <dsahern@gmail.com>
>> Cc: Jiri Olsa <jolsa@redhat.com>
>> Cc: Minchan Kim <minchan@kernel.org>
>> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
>> Cc: linux-mm@kvack.org
>> Link: http://lkml.kernel.org/r/1428298576-9785-3-git-send-email-namhyung@kernel.org
>> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
>> ---
>>  include/trace/events/filemap.h |  8 ++++----
>>  include/trace/events/kmem.h    | 42 +++++++++++++++++++++---------------------
>>  include/trace/events/vmscan.h  |  8 ++++----
>>  3 files changed, 29 insertions(+), 29 deletions(-)
>>
>> diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h
>> index 0421f49a20f7..42febb6bc1d5 100644
>> --- a/include/trace/events/filemap.h
>> +++ b/include/trace/events/filemap.h
>> @@ -18,14 +18,14 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
>>  	TP_ARGS(page),
>>  
>>  	TP_STRUCT__entry(
>> -		__field(struct page *, page)
>> +		__field(unsigned long, pfn)
>>  		__field(unsigned long, i_ino)
>>  		__field(unsigned long, index)
>>  		__field(dev_t, s_dev)
>>  	),
>>  
>>  	TP_fast_assign(
>> -		__entry->page = page;
>> +		__entry->pfn = page_to_pfn(page);
>>  		__entry->i_ino = page->mapping->host->i_ino;
>>  		__entry->index = page->index;
>>  		if (page->mapping->host->i_sb)
>> @@ -37,8 +37,8 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
>>  	TP_printk("dev %d:%d ino %lx page=%p pfn=%lu ofs=%lu",
>>  		MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
>>  		__entry->i_ino,
>> -		__entry->page,
>> -		page_to_pfn(__entry->page),
>> +		pfn_to_page(__entry->pfn),
>> +		__entry->pfn,
>>  		__entry->index << PAGE_SHIFT)
>>  );
>>  
>> diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
>> index 4ad10baecd4d..81ea59812117 100644
>> --- a/include/trace/events/kmem.h
>> +++ b/include/trace/events/kmem.h
>> @@ -154,18 +154,18 @@ TRACE_EVENT(mm_page_free,
>>  	TP_ARGS(page, order),
>>  
>>  	TP_STRUCT__entry(
>> -		__field(	struct page *,	page		)
>> +		__field(	unsigned long,	pfn		)
>>  		__field(	unsigned int,	order		)
>>  	),
>>  
>>  	TP_fast_assign(
>> -		__entry->page		= page;
>> +		__entry->pfn		= page_to_pfn(page);
>>  		__entry->order		= order;
>>  	),
>>  
>>  	TP_printk("page=%p pfn=%lu order=%d",
>> -			__entry->page,
>> -			page_to_pfn(__entry->page),
>> +			pfn_to_page(__entry->pfn),
>> +			__entry->pfn,
>>  			__entry->order)
>>  );
>>  
>> @@ -176,18 +176,18 @@ TRACE_EVENT(mm_page_free_batched,
>>  	TP_ARGS(page, cold),
>>  
>>  	TP_STRUCT__entry(
>> -		__field(	struct page *,	page		)
>> +		__field(	unsigned long,	pfn		)
>>  		__field(	int,		cold		)
>>  	),
>>  
>>  	TP_fast_assign(
>> -		__entry->page		= page;
>> +		__entry->pfn		= page_to_pfn(page);
>>  		__entry->cold		= cold;
>>  	),
>>  
>>  	TP_printk("page=%p pfn=%lu order=0 cold=%d",
>> -			__entry->page,
>> -			page_to_pfn(__entry->page),
>> +			pfn_to_page(__entry->pfn),
>> +			__entry->pfn,
>>  			__entry->cold)
>>  );
>>  
>> @@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc,
>>  	TP_ARGS(page, order, gfp_flags, migratetype),
>>  
>>  	TP_STRUCT__entry(
>> -		__field(	struct page *,	page		)
>> +		__field(	unsigned long,	pfn		)
>>  		__field(	unsigned int,	order		)
>>  		__field(	gfp_t,		gfp_flags	)
>>  		__field(	int,		migratetype	)
>>  	),
>>  
>>  	TP_fast_assign(
>> -		__entry->page		= page;
>> +		__entry->pfn		= page ? page_to_pfn(page) : -1UL;
>>  		__entry->order		= order;
>>  		__entry->gfp_flags	= gfp_flags;
>>  		__entry->migratetype	= migratetype;
>>  	),
>>  
>>  	TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
>> -		__entry->page,
>> -		__entry->page ? page_to_pfn(__entry->page) : 0,
>> +		__entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
>> +		__entry->pfn != -1UL ? __entry->pfn : 0,
>>  		__entry->order,
>>  		__entry->migratetype,
>>  		show_gfp_flags(__entry->gfp_flags))
>> @@ -227,20 +227,20 @@ DECLARE_EVENT_CLASS(mm_page,
>>  	TP_ARGS(page, order, migratetype),
>>  
>>  	TP_STRUCT__entry(
>> -		__field(	struct page *,	page		)
>> +		__field(	unsigned long,	pfn		)
>>  		__field(	unsigned int,	order		)
>>  		__field(	int,		migratetype	)
>>  	),
>>  
>>  	TP_fast_assign(
>> -		__entry->page		= page;
>> +		__entry->pfn		= page ? page_to_pfn(page) : -1UL;
>>  		__entry->order		= order;
>>  		__entry->migratetype	= migratetype;
>>  	),
>>  
>>  	TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d",
>> -		__entry->page,
>> -		__entry->page ? page_to_pfn(__entry->page) : 0,
>> +		__entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
>> +		__entry->pfn != -1UL ? __entry->pfn : 0,
>>  		__entry->order,
>>  		__entry->migratetype,
>>  		__entry->order == 0)
>> @@ -260,7 +260,7 @@ DEFINE_EVENT_PRINT(mm_page, mm_page_pcpu_drain,
>>  	TP_ARGS(page, order, migratetype),
>>  
>>  	TP_printk("page=%p pfn=%lu order=%d migratetype=%d",
>> -		__entry->page, page_to_pfn(__entry->page),
>> +		pfn_to_page(__entry->pfn), __entry->pfn,
>>  		__entry->order, __entry->migratetype)
>>  );
>>  
>> @@ -275,7 +275,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>>  		alloc_migratetype, fallback_migratetype),
>>  
>>  	TP_STRUCT__entry(
>> -		__field(	struct page *,	page			)
>> +		__field(	unsigned long,	pfn			)
>>  		__field(	int,		alloc_order		)
>>  		__field(	int,		fallback_order		)
>>  		__field(	int,		alloc_migratetype	)
>> @@ -284,7 +284,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>>  	),
>>  
>>  	TP_fast_assign(
>> -		__entry->page			= page;
>> +		__entry->pfn			= page_to_pfn(page);
>>  		__entry->alloc_order		= alloc_order;
>>  		__entry->fallback_order		= fallback_order;
>>  		__entry->alloc_migratetype	= alloc_migratetype;
>> @@ -294,8 +294,8 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>>  	),
>>  
>>  	TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d",
>> -		__entry->page,
>> -		page_to_pfn(__entry->page),
>> +		pfn_to_page(__entry->pfn),
>> +		__entry->pfn,
>>  		__entry->alloc_order,
>>  		__entry->fallback_order,
>>  		pageblock_order,
>> diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
>> index 69590b6ffc09..f66476b96264 100644
>> --- a/include/trace/events/vmscan.h
>> +++ b/include/trace/events/vmscan.h
>> @@ -336,18 +336,18 @@ TRACE_EVENT(mm_vmscan_writepage,
>>  	TP_ARGS(page, reclaim_flags),
>>  
>>  	TP_STRUCT__entry(
>> -		__field(struct page *, page)
>> +		__field(unsigned long, pfn)
>>  		__field(int, reclaim_flags)
>>  	),
>>  
>>  	TP_fast_assign(
>> -		__entry->page = page;
>> +		__entry->pfn = page_to_pfn(page);
>>  		__entry->reclaim_flags = reclaim_flags;
>>  	),
>>  
>>  	TP_printk("page=%p pfn=%lu flags=%s",
>> -		__entry->page,
>> -		page_to_pfn(__entry->page),
>> +		pfn_to_page(__entry->pfn),
>> +		__entry->pfn,
>>  		show_reclaim_flags(__entry->reclaim_flags))
>>  );
>>  
>>
> 

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-07-31  7:43   ` Vlastimil Babka
  2017-08-31 11:38     ` Vlastimil Babka
@ 2017-08-31 13:43     ` Steven Rostedt
  2017-08-31 14:31       ` Vlastimil Babka
  1 sibling, 1 reply; 16+ messages in thread
From: Steven Rostedt @ 2017-08-31 13:43 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel,
	Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

On Mon, 31 Jul 2017 09:43:41 +0200 Vlastimil Babka <vbabka@suse.cz> wrote:

> On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
> > From: Namhyung Kim <namhyung@kernel.org>
> > 
> > The struct page is opaque for userspace tools, so it'd be better to save
> > pfn in order to identify page frames.
> > 
> > The textual output of $debugfs/tracing/trace file remains unchanged and
> > only raw (binary) data format is changed - but thanks to libtraceevent,
> > userspace tools which deal with the raw data (like perf and trace-cmd)
> > can parse the format easily.  
> 
> Hmm it seems trace-cmd doesn't work that well, at least on current
> x86_64 kernel where I noticed it:
> 
>  trace-cmd-22020 [003] 105219.542610: mm_page_alloc:        [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1

Which version of trace-cmd failed? It parses for me. Hmm, the
vmemmap_base isn't in the event format file. It's the actual address.
That's probably what failed to parse.

> 
> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
> The events/kmem/mm_page_alloc/format file contains this for page:
> 
> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)

But yeah, I think the output is wrong. I just ran this:

 page=0xffffea00000a62f4 pfn=680692 order=0 migratetype=0 gfp_flags=GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK

But running it with trace-cmd report -R (raw format):

 mm_page_alloc:         pfn=0xa62f4 order=0 gfp_flags=24150208 migratetype=0

The parser currently ignores types, so it doesn't do pointer
arithmetic correctly, and it would be hard to do so here, as it doesn't know the
size of the struct page. What could work is if we changed the printf
fmt to be:

  (unsigned long)(0xffffea0000000000UL) + (REC->pfn * sizeof(struct page))


> 
> I think userspace can't know vmemmap_base nor the implied sizeof(struct
> page) for pointer arithmetic?
> 
> On older 4.4-based kernel:
> 
> REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)

This is what I have on 4.13-rc7

> 
> This also fails to parse, so it must be the struct page part?

Again, what version of trace-cmd do you have?


> 
> I think the problem is, even if we solve this with some more
> preprocessor trickery to make the format file contain only constant
> numbers, pfn_to_page() on e.g. sparse memory model without vmemmap is
> more complicated than simple arithmetic, and can't be exported in the
> format file.
> 
> I'm afraid that to support userspace parsing of the trace data, we will
> have to store both struct page and pfn... or perhaps give up on reporting
> the struct page pointer completely. Thoughts?

Had some thoughts up above.

-- Steve


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-08-31 13:43     ` Steven Rostedt
@ 2017-08-31 14:31       ` Vlastimil Babka
  2017-08-31 14:44         ` Steven Rostedt
  0 siblings, 1 reply; 16+ messages in thread
From: Vlastimil Babka @ 2017-08-31 14:31 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel,
	Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

On 08/31/2017 03:43 PM, Steven Rostedt wrote:
> On Mon, 31 Jul 2017 09:43:41 +0200 Vlastimil Babka <vbabka@suse.cz> wrote:
> 
>> On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
>>> From: Namhyung Kim <namhyung@kernel.org>
>>>
>>> The struct page is opaque for userspace tools, so it'd be better to save
>>> pfn in order to identify page frames.
>>>
>>> The textual output of $debugfs/tracing/trace file remains unchanged and
>>> only raw (binary) data format is changed - but thanks to libtraceevent,
>>> userspace tools which deal with the raw data (like perf and trace-cmd)
>>> can parse the format easily.  
>>
>> Hmm it seems trace-cmd doesn't work that well, at least on current
>> x86_64 kernel where I noticed it:
>>
>>  trace-cmd-22020 [003] 105219.542610: mm_page_alloc:        [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1
> 
> Which version of trace-cmd failed? It parses for me. Hmm, the
> vmemmap_base isn't in the event format file. It's the actual address.
> That's probably what failed to parse.

Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.

> 
>>
>> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
>> The events/kmem/mm_page_alloc/format file contains this for page:
>>
>> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
> 
> But yeah, I think the output is wrong. I just ran this:
> 
>  page=0xffffea00000a62f4 pfn=680692 order=0 migratetype=0 gfp_flags=GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK
> 
> But running it with trace-cmd report -R (raw format):
> 
>  mm_page_alloc:         pfn=0xa62f4 order=0 gfp_flags=24150208 migratetype=0
> 
> The parser currently ignores types, so it doesn't do pointer
> arithmetic correctly, and it would be hard to do so here, as it doesn't know the
> size of the struct page. What could work is if we changed the printf
> fmt to be:
> 
>   (unsigned long)(0xffffea0000000000UL) + (REC->pfn * sizeof(struct page))
> 
> 
>>
>> I think userspace can't know vmemmap_base nor the implied sizeof(struct
>> page) for pointer arithmetic?
>>
>> On older 4.4-based kernel:
>>
>> REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)
> 
> This is what I have on 4.13-rc7
> 
>>
>> This also fails to parse, so it must be the struct page part?
> 
> Again, what version of trace-cmd do you have?

On the older distro it was 2.0.4

> 
>>
>> I think the problem is, even if we solve this with some more
>> preprocessor trickery to make the format file contain only constant
>> numbers, pfn_to_page() on e.g. sparse memory model without vmemmap is
>> more complicated than simple arithmetic, and can't be exported in the
>> format file.
>>
>> I'm afraid that to support userspace parsing of the trace data, we will
>> have to store both struct page and pfn... or perhaps give up on reporting
>> the struct page pointer completely. Thoughts?
> 
> Had some thoughts up above.

Yeah, it could be made to work for some configurations, but see the part
about "sparse memory model without vmemmap" above.

> -- Steve
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-08-31 14:31       ` Vlastimil Babka
@ 2017-08-31 14:44         ` Steven Rostedt
  2017-09-01  8:16           ` Vlastimil Babka
  0 siblings, 1 reply; 16+ messages in thread
From: Steven Rostedt @ 2017-08-31 14:44 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel,
	Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

On Thu, 31 Aug 2017 16:31:36 +0200
Vlastimil Babka <vbabka@suse.cz> wrote:


> > Which version of trace-cmd failed? It parses for me. Hmm, the
> > vmemmap_base isn't in the event format file. It's the actual address.
> > That's probably what failed to parse.  
> 
> Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.

Right, but you have the vmemmap_base in the event format, which can't
be parsed by userspace because it has no idea what the value of the
vmemmap_base is.

> 
> >   
> >>
> >> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
> >> The events/kmem/mm_page_alloc/format file contains this for page:
> >>
> >> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)  
> > 


> >> On older 4.4-based kernel:
> >>
> >> REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)  
> > 
> > This is what I have on 4.13-rc7
> >   
> >>
> >> This also fails to parse, so it must be the struct page part?  
> > 
> > Again, what version of trace-cmd do you have?  
> 
> On the older distro it was 2.0.4

Right. That's probably why it failed to parse here. If you installed
the latest trace-cmd from the git repo, it probably will parse fine.

> 
> >   
> >>
> >> I think the problem is, even if we solve this with some more
> >> preprocessor trickery to make the format file contain only constant
> >> numbers, pfn_to_page() on e.g. sparse memory model without vmemmap is
> >> more complicated than simple arithmetic, and can't be exported in the
> >> format file.
> >>
> >> I'm afraid that to support userspace parsing of the trace data, we will
> >> have to store both struct page and pfn... or perhaps give up on reporting
> >> the struct page pointer completely. Thoughts?  
> > 
> > Had some thoughts up above.  
> 
> Yeah, it could be made to work for some configurations, but see the part
> about "sparse memory model without vmemmap" above.

Right, but that should work with the latest trace-cmd. Does it?

-- Steve


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-08-31 14:44         ` Steven Rostedt
@ 2017-09-01  8:16           ` Vlastimil Babka
  2017-09-01 11:15             ` Steven Rostedt
  0 siblings, 1 reply; 16+ messages in thread
From: Vlastimil Babka @ 2017-09-01  8:16 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel,
	Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

On 08/31/2017 04:44 PM, Steven Rostedt wrote:
> On Thu, 31 Aug 2017 16:31:36 +0200
> Vlastimil Babka <vbabka@suse.cz> wrote:
> 
> 
>>> Which version of trace-cmd failed? It parses for me. Hmm, the
>>> vmemmap_base isn't in the event format file. It's the actual address.
>>> That's probably what failed to parse.  
>>
>> Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.
> 
> Right, but you have the vmemmap_base in the event format, which can't
> be parsed by userspace because it has no idea what the value of the
> vmemmap_base is.

This seems to be caused by CONFIG_RANDOMIZE_MEMORY. If we somehow put the value
in the format file, it's an info leak? (but I guess kernels that care must have
ftrace disabled anyway :)

>>
>>>   
>>>>
>>>> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
>>>> The events/kmem/mm_page_alloc/format file contains this for page:
>>>>
>>>> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)  
>>>>
>>>> I think the problem is, even if we solve this with some more
>>>> preprocessor trickery to make the format file contain only constant
>>>> numbers, pfn_to_page() on e.g. sparse memory model without vmemmap is
>>>> more complicated than simple arithmetic, and can't be exported in the
>>>> format file.
>>>>
>>>> I'm afraid that to support userspace parsing of the trace data, we will
>>>> have to store both struct page and pfn... or perhaps give up on reporting
>>>> the struct page pointer completely. Thoughts?  
>>>
>>> Had some thoughts up above.  
>>
>> Yeah, it could be made to work for some configurations, but see the part
>> about "sparse memory model without vmemmap" above.
> 
> Right, but that should work with the latest trace-cmd. Does it?

Hmm, by "sparse memory model without vmemmap" I don't mean there's a
number instead of "vmemmap_base". I mean CONFIG_SPARSEMEM=y without CONFIG_SPARSEMEM_VMEMMAP.

Then __pfn_to_page() looks like this:

#define __pfn_to_page(pfn)                                      \
({      unsigned long __pfn = (pfn);                            \
        struct mem_section *__sec = __pfn_to_section(__pfn);    \
        __section_mem_map_addr(__sec) + __pfn;                  \
})

Then the part of format file looks like this:

REC->pfn != -1UL ? ({ unsigned long __pfn = (REC->pfn); struct mem_section *__sec = __pfn_to_section(__pfn); __section_mem_map_addr(__sec) + __pfn; }) : ((void *)0)

The section things involve some array lookups, so I don't see how we
could pass it to tracing userspace. Would we want to special-case
this config to store both pfn and struct page in the trace frame? And
make sure the simpler ones work despite all the existing gotchas?
I'd rather say we should either store both pfn and page pointer, or
just throw away the page pointer as the pfn is enough to e.g. match
alloc and free, and also much more deterministic.
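
Pairing by pfn alone is straightforward for a post-processing tool; a
minimal sketch with hypothetical flattened records (illustrative types,
not the real libtraceevent API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical flattened trace record: the pfn field alone is enough to
 * pair an mm_page_alloc with the mm_page_free that releases it. */
struct page_event {
	unsigned long pfn;  /* -1UL marks a failed allocation */
	bool is_alloc;
};

/* Count allocations with no later matching free. Each free is consumed
 * at most once (its pfn is overwritten), so recycled pfns pair up with
 * the right allocation. Naive O(n^2) scan, fine for a sketch. */
static size_t unmatched_allocs(struct page_event *ev, size_t n)
{
	size_t leaked = 0;

	for (size_t i = 0; i < n; i++) {
		if (!ev[i].is_alloc || ev[i].pfn == -1UL)
			continue;
		bool freed = false;
		for (size_t j = i + 1; j < n; j++) {
			if (!ev[j].is_alloc && ev[j].pfn == ev[i].pfn) {
				ev[j].pfn = -1UL; /* consume this free */
				freed = true;
				break;
			}
		}
		if (!freed)
			leaked++;
	}
	return leaked;
}
```

With events { alloc 5, free 5, alloc 5, alloc 7, failed alloc }, the first
alloc of pfn 5 pairs with the free, and the second alloc of 5 and the
alloc of 7 are reported as unmatched.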
 
> -- Steve
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page
  2017-09-01  8:16           ` Vlastimil Babka
@ 2017-09-01 11:15             ` Steven Rostedt
  0 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2017-09-01 11:15 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Arnaldo Carvalho de Melo, Ingo Molnar, linux-kernel,
	Namhyung Kim, David Ahern, Jiri Olsa, Minchan Kim,
	Peter Zijlstra, linux-mm

On Fri, 1 Sep 2017 10:16:21 +0200
Vlastimil Babka <vbabka@suse.cz> wrote:
 
> > Right, but that should work with the latest trace-cmd. Does it?  
> 
> Hmm, by "sparse memory model without vmemmap" I don't mean there's a
> number instead of "vmemmap_base". I mean CONFIG_SPARSEMEM=y without CONFIG_SPARSEMEM_VMEMMAP.
> 
> Then __pfn_to_page() looks like this:
> 
> #define __pfn_to_page(pfn)                                      \
> ({      unsigned long __pfn = (pfn);                            \
>         struct mem_section *__sec = __pfn_to_section(__pfn);    \
>         __section_mem_map_addr(__sec) + __pfn;                  \
> })
> 
> Then the part of format file looks like this:
> 
> REC->pfn != -1UL ? ({ unsigned long __pfn = (REC->pfn); struct mem_section *__sec = __pfn_to_section(__pfn); __section_mem_map_addr(__sec) + __pfn; }) : ((void *)0)

Ouch.

> 
> The section things involve some array lookups, so I don't see how we
> could pass it to tracing userspace. Would we want to special-case
> this config to store both pfn and struct page in the trace frame? And
> make sure the simpler ones work despite all the existing gotchas?
> I'd rather say we should either store both pfn and page pointer, or
> just throw away the page pointer as the pfn is enough to e.g. match
> alloc and free, and also much more deterministic.

Write up a patch and we'll take a look.

-- Steve


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2017-09-01 11:15 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-04-13 22:14 [GIT PULL 0/5] perf/core improvements and fixes Arnaldo Carvalho de Melo
2015-04-13 22:14 ` [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page Arnaldo Carvalho de Melo
2017-07-31  7:43   ` Vlastimil Babka
2017-08-31 11:38     ` Vlastimil Babka
2017-08-31 13:43     ` Steven Rostedt
2017-08-31 14:31       ` Vlastimil Babka
2017-08-31 14:44         ` Steven Rostedt
2017-09-01  8:16           ` Vlastimil Babka
2017-09-01 11:15             ` Steven Rostedt
2015-04-13 22:14 ` [PATCH 2/5] perf kmem: Analyze page allocator events also Arnaldo Carvalho de Melo
2015-04-13 22:33 ` [GIT PULL 0/5] perf/core improvements and fixes Masami Hiramatsu
2015-04-13 23:09   ` Arnaldo Carvalho de Melo
2015-04-13 23:19     ` Arnaldo Carvalho de Melo
2015-04-14  7:04       ` Masami Hiramatsu
2015-04-14 12:17         ` Arnaldo Carvalho de Melo
2015-04-14 12:12       ` Ingo Molnar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).