* [PATCH v2 1/5] mm/compaction: change tracepoint format from decimal to hexadecimal
From: Joonsoo Kim @ 2015-01-12  8:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Mel Gorman, David Rientjes, linux-mm,
	linux-kernel, Joonsoo Kim

To check the range that compaction is working on, the tracepoint prints
the start/end pfn of the zone and the start pfn of both scanners, in
decimal format. Since we manage all pages in power-of-2 units, which are
well represented in hexadecimal, this patch changes the tracepoint
format from decimal to hexadecimal. This improves readability: for
example, it makes it easy to notice whether the current scanner is
trying to compact a previously attempted pageblock or not.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/trace/events/compaction.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
index c6814b9..1337d9e 100644
--- a/include/trace/events/compaction.h
+++ b/include/trace/events/compaction.h
@@ -104,7 +104,7 @@ TRACE_EVENT(mm_compaction_begin,
 		__entry->zone_end = zone_end;
 	),
 
-	TP_printk("zone_start=%lu migrate_start=%lu free_start=%lu zone_end=%lu",
+	TP_printk("zone_start=0x%lx migrate_start=0x%lx free_start=0x%lx zone_end=0x%lx",
 		__entry->zone_start,
 		__entry->migrate_start,
 		__entry->free_start,
-- 
1.7.9.5



* [PATCH v2 2/5] mm/compaction: enhance tracepoint output for compaction begin/end
From: Joonsoo Kim @ 2015-01-12  8:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Mel Gorman, David Rientjes, linux-mm,
	linux-kernel, Joonsoo Kim

We now have a tracepoint for the begin event of compaction that prints
the start position of both scanners, but the tracepoint for the end
event of compaction doesn't print the finish position of both scanners.
It'd also be useful to know the finish position of both scanners, so
this patch adds it. It will help to find odd behavior or problems in
compaction's internal logic.

The compaction mode is also added to both the begin/end tracepoint
output, since compaction behavior differs considerably depending on
the mode.

Lastly, the status format is changed from a status number to a string
for readability.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/compaction.h        |    2 ++
 include/trace/events/compaction.h |   49 ++++++++++++++++++++++++++-----------
 mm/compaction.c                   |   14 +++++++++--
 3 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index 3238ffa..a9547b6 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -12,6 +12,7 @@
 #define COMPACT_PARTIAL		3
 /* The full zone was compacted */
 #define COMPACT_COMPLETE	4
+/* When adding new state, please change compaction_status_string, too */
 
 /* Used to signal whether compaction detected need_sched() or lock contention */
 /* No contention detected */
@@ -22,6 +23,7 @@
 #define COMPACT_CONTENDED_LOCK	2
 
 #ifdef CONFIG_COMPACTION
+extern char *compaction_status_string[];
 extern int sysctl_compact_memory;
 extern int sysctl_compaction_handler(struct ctl_table *table, int write,
 			void __user *buffer, size_t *length, loff_t *ppos);
diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
index 1337d9e..839f6fa 100644
--- a/include/trace/events/compaction.h
+++ b/include/trace/events/compaction.h
@@ -85,46 +85,67 @@ TRACE_EVENT(mm_compaction_migratepages,
 );
 
 TRACE_EVENT(mm_compaction_begin,
-	TP_PROTO(unsigned long zone_start, unsigned long migrate_start,
-		unsigned long free_start, unsigned long zone_end),
+	TP_PROTO(unsigned long zone_start, unsigned long migrate_pfn,
+		unsigned long free_pfn, unsigned long zone_end, bool sync),
 
-	TP_ARGS(zone_start, migrate_start, free_start, zone_end),
+	TP_ARGS(zone_start, migrate_pfn, free_pfn, zone_end, sync),
 
 	TP_STRUCT__entry(
 		__field(unsigned long, zone_start)
-		__field(unsigned long, migrate_start)
-		__field(unsigned long, free_start)
+		__field(unsigned long, migrate_pfn)
+		__field(unsigned long, free_pfn)
 		__field(unsigned long, zone_end)
+		__field(bool, sync)
 	),
 
 	TP_fast_assign(
 		__entry->zone_start = zone_start;
-		__entry->migrate_start = migrate_start;
-		__entry->free_start = free_start;
+		__entry->migrate_pfn = migrate_pfn;
+		__entry->free_pfn = free_pfn;
 		__entry->zone_end = zone_end;
+		__entry->sync = sync;
 	),
 
-	TP_printk("zone_start=0x%lx migrate_start=0x%lx free_start=0x%lx zone_end=0x%lx",
+	TP_printk("zone_start=0x%lx migrate_pfn=0x%lx free_pfn=0x%lx zone_end=0x%lx, mode=%s",
 		__entry->zone_start,
-		__entry->migrate_start,
-		__entry->free_start,
-		__entry->zone_end)
+		__entry->migrate_pfn,
+		__entry->free_pfn,
+		__entry->zone_end,
+		__entry->sync ? "sync" : "async")
 );
 
 TRACE_EVENT(mm_compaction_end,
-	TP_PROTO(int status),
+	TP_PROTO(unsigned long zone_start, unsigned long migrate_pfn,
+		unsigned long free_pfn, unsigned long zone_end, bool sync,
+		int status),
 
-	TP_ARGS(status),
+	TP_ARGS(zone_start, migrate_pfn, free_pfn, zone_end, sync, status),
 
 	TP_STRUCT__entry(
+		__field(unsigned long, zone_start)
+		__field(unsigned long, migrate_pfn)
+		__field(unsigned long, free_pfn)
+		__field(unsigned long, zone_end)
+		__field(bool, sync)
 		__field(int, status)
 	),
 
 	TP_fast_assign(
+		__entry->zone_start = zone_start;
+		__entry->migrate_pfn = migrate_pfn;
+		__entry->free_pfn = free_pfn;
+		__entry->zone_end = zone_end;
+		__entry->sync = sync;
 		__entry->status = status;
 	),
 
-	TP_printk("status=%d", __entry->status)
+	TP_printk("zone_start=0x%lx migrate_pfn=0x%lx free_pfn=0x%lx zone_end=0x%lx, mode=%s status=%s",
+		__entry->zone_start,
+		__entry->migrate_pfn,
+		__entry->free_pfn,
+		__entry->zone_end,
+		__entry->sync ? "sync" : "async",
+		compaction_status_string[__entry->status])
 );
 
 #endif /* _TRACE_COMPACTION_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index 546e571..2d86a20 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -19,6 +19,14 @@
 #include "internal.h"
 
 #ifdef CONFIG_COMPACTION
+char *compaction_status_string[] = {
+	"deferred",
+	"skipped",
+	"continue",
+	"partial",
+	"complete",
+};
+
 static inline void count_compact_event(enum vm_event_item item)
 {
 	count_vm_event(item);
@@ -1197,7 +1205,8 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 		zone->compact_cached_migrate_pfn[1] = cc->migrate_pfn;
 	}
 
-	trace_mm_compaction_begin(start_pfn, cc->migrate_pfn, cc->free_pfn, end_pfn);
+	trace_mm_compaction_begin(start_pfn, cc->migrate_pfn,
+				cc->free_pfn, end_pfn, sync);
 
 	migrate_prep_local();
 
@@ -1299,7 +1308,8 @@ out:
 			zone->compact_cached_free_pfn = free_pfn;
 	}
 
-	trace_mm_compaction_end(ret);
+	trace_mm_compaction_end(start_pfn, cc->migrate_pfn,
+				cc->free_pfn, end_pfn, sync, ret);
 
 	return ret;
 }
-- 
1.7.9.5



* [PATCH v2 3/5] mm/compaction: print current range where compaction works
From: Joonsoo Kim @ 2015-01-12  8:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Mel Gorman, David Rientjes, linux-mm,
	linux-kernel, Joonsoo Kim

It'd be useful to know the current range that compaction is working on,
for detailed analysis. With it, we can know which pageblock we actually
scan and isolate from, and how many pages we try in that pageblock, and
can roughly guess why it doesn't become a pageblock-order freepage.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/trace/events/compaction.h |   30 +++++++++++++++++++++++-------
 mm/compaction.c                   |    9 ++++++---
 2 files changed, 29 insertions(+), 10 deletions(-)

diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
index 839f6fa..139020b 100644
--- a/include/trace/events/compaction.h
+++ b/include/trace/events/compaction.h
@@ -11,39 +11,55 @@
 
 DECLARE_EVENT_CLASS(mm_compaction_isolate_template,
 
-	TP_PROTO(unsigned long nr_scanned,
+	TP_PROTO(
+		unsigned long start_pfn,
+		unsigned long end_pfn,
+		unsigned long nr_scanned,
 		unsigned long nr_taken),
 
-	TP_ARGS(nr_scanned, nr_taken),
+	TP_ARGS(start_pfn, end_pfn, nr_scanned, nr_taken),
 
 	TP_STRUCT__entry(
+		__field(unsigned long, start_pfn)
+		__field(unsigned long, end_pfn)
 		__field(unsigned long, nr_scanned)
 		__field(unsigned long, nr_taken)
 	),
 
 	TP_fast_assign(
+		__entry->start_pfn = start_pfn;
+		__entry->end_pfn = end_pfn;
 		__entry->nr_scanned = nr_scanned;
 		__entry->nr_taken = nr_taken;
 	),
 
-	TP_printk("nr_scanned=%lu nr_taken=%lu",
+	TP_printk("range=(0x%lx ~ 0x%lx) nr_scanned=%lu nr_taken=%lu",
+		__entry->start_pfn,
+		__entry->end_pfn,
 		__entry->nr_scanned,
 		__entry->nr_taken)
 );
 
 DEFINE_EVENT(mm_compaction_isolate_template, mm_compaction_isolate_migratepages,
 
-	TP_PROTO(unsigned long nr_scanned,
+	TP_PROTO(
+		unsigned long start_pfn,
+		unsigned long end_pfn,
+		unsigned long nr_scanned,
 		unsigned long nr_taken),
 
-	TP_ARGS(nr_scanned, nr_taken)
+	TP_ARGS(start_pfn, end_pfn, nr_scanned, nr_taken)
 );
 
 DEFINE_EVENT(mm_compaction_isolate_template, mm_compaction_isolate_freepages,
-	TP_PROTO(unsigned long nr_scanned,
+
+	TP_PROTO(
+		unsigned long start_pfn,
+		unsigned long end_pfn,
+		unsigned long nr_scanned,
 		unsigned long nr_taken),
 
-	TP_ARGS(nr_scanned, nr_taken)
+	TP_ARGS(start_pfn, end_pfn, nr_scanned, nr_taken)
 );
 
 TRACE_EVENT(mm_compaction_migratepages,
diff --git a/mm/compaction.c b/mm/compaction.c
index 2d86a20..be28469 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -429,11 +429,12 @@ isolate_fail:
 
 	}
 
+	trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
+					nr_scanned, total_isolated);
+
 	/* Record how far we have got within the block */
 	*start_pfn = blockpfn;
 
-	trace_mm_compaction_isolate_freepages(nr_scanned, total_isolated);
-
 	/*
 	 * If strict isolation is requested by CMA then check that all the
 	 * pages requested were isolated. If there were any failures, 0 is
@@ -589,6 +590,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	unsigned long flags = 0;
 	bool locked = false;
 	struct page *page = NULL, *valid_page = NULL;
+	unsigned long start_pfn = low_pfn;
 
 	/*
 	 * Ensure that there are not too many pages isolated from the LRU
@@ -749,7 +751,8 @@ isolate_success:
 	if (low_pfn == end_pfn)
 		update_pageblock_skip(cc, valid_page, nr_isolated, true);
 
-	trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
+	trace_mm_compaction_isolate_migratepages(start_pfn, low_pfn,
+						nr_scanned, nr_isolated);
 
 	count_compact_events(COMPACTMIGRATE_SCANNED, nr_scanned);
 	if (nr_isolated)
-- 
1.7.9.5



* [PATCH v2 4/5] mm/compaction: more tracepoints to understand when/why compaction starts/finishes
From: Joonsoo Kim @ 2015-01-12  8:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Mel Gorman, David Rientjes, linux-mm,
	linux-kernel, Joonsoo Kim

It is not well understood when and why compaction starts or finishes,
or why it does not. With these new tracepoints, we can learn much more
about the reasons compaction starts and finishes. I found the following
bug with these tracepoints.

http://www.spinics.net/lists/linux-mm/msg81582.html

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/compaction.h        |    3 ++
 include/trace/events/compaction.h |   94 +++++++++++++++++++++++++++++++++++++
 mm/compaction.c                   |   41 ++++++++++++++--
 3 files changed, 134 insertions(+), 4 deletions(-)

diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index a9547b6..d82181a 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -12,6 +12,9 @@
 #define COMPACT_PARTIAL		3
 /* The full zone was compacted */
 #define COMPACT_COMPLETE	4
+/* For more detailed tracepoint output */
+#define COMPACT_NO_SUITABLE_PAGE	5
+#define COMPACT_NOT_SUITABLE_ZONE	6
 /* When adding new state, please change compaction_status_string, too */
 
 /* Used to signal whether compaction detected need_sched() or lock contention */
diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
index 139020b..839dd4f 100644
--- a/include/trace/events/compaction.h
+++ b/include/trace/events/compaction.h
@@ -164,6 +164,100 @@ TRACE_EVENT(mm_compaction_end,
 		compaction_status_string[__entry->status])
 );
 
+TRACE_EVENT(mm_compaction_try_to_compact_pages,
+
+	TP_PROTO(
+		int order,
+		gfp_t gfp_mask,
+		enum migrate_mode mode,
+		int alloc_flags,
+		int classzone_idx),
+
+	TP_ARGS(order, gfp_mask, mode, alloc_flags, classzone_idx),
+
+	TP_STRUCT__entry(
+		__field(int, order)
+		__field(gfp_t, gfp_mask)
+		__field(enum migrate_mode, mode)
+		__field(int, alloc_flags)
+		__field(int, classzone_idx)
+	),
+
+	TP_fast_assign(
+		__entry->order = order;
+		__entry->gfp_mask = gfp_mask;
+		__entry->mode = mode;
+		__entry->alloc_flags = alloc_flags;
+		__entry->classzone_idx = classzone_idx;
+	),
+
+	TP_printk("order=%d gfp_mask=0x%x mode=%d alloc_flags=0x%x classzone_idx=%d",
+		__entry->order,
+		__entry->gfp_mask,
+		(int)__entry->mode,
+		__entry->alloc_flags,
+		__entry->classzone_idx)
+);
+
+DECLARE_EVENT_CLASS(mm_compaction_suitable_template,
+
+	TP_PROTO(struct zone *zone,
+		int order,
+		int alloc_flags,
+		int classzone_idx,
+		int ret),
+
+	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret),
+
+	TP_STRUCT__entry(
+		__field(int, nid)
+		__field(char *, name)
+		__field(int, order)
+		__field(int, alloc_flags)
+		__field(int, classzone_idx)
+		__field(int, ret)
+	),
+
+	TP_fast_assign(
+		__entry->nid = zone_to_nid(zone);
+		__entry->name = (char *)zone->name;
+		__entry->order = order;
+		__entry->alloc_flags = alloc_flags;
+		__entry->classzone_idx = classzone_idx;
+		__entry->ret = ret;
+	),
+
+	TP_printk("node=%d zone=%-8s order=%d alloc_flags=0x%x classzone_idx=%d ret=%s",
+		__entry->nid,
+		__entry->name,
+		__entry->order,
+		__entry->alloc_flags,
+		__entry->classzone_idx,
+		compaction_status_string[__entry->ret])
+);
+
+DEFINE_EVENT(mm_compaction_suitable_template, mm_compaction_finished,
+
+	TP_PROTO(struct zone *zone,
+		int order,
+		int alloc_flags,
+		int classzone_idx,
+		int ret),
+
+	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret)
+);
+
+DEFINE_EVENT(mm_compaction_suitable_template, mm_compaction_suitable,
+
+	TP_PROTO(struct zone *zone,
+		int order,
+		int alloc_flags,
+		int classzone_idx,
+		int ret),
+
+	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret)
+);
+
 #endif /* _TRACE_COMPACTION_H */
 
 /* This part must be outside protection */
diff --git a/mm/compaction.c b/mm/compaction.c
index be28469..7500f01 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -25,6 +25,8 @@ char *compaction_status_string[] = {
 	"continue",
 	"partial",
 	"complete",
+	"no_suitable_page",
+	"not_suitable_zone",
 };
 
 static inline void count_compact_event(enum vm_event_item item)
@@ -1048,7 +1050,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 	return cc->nr_migratepages ? ISOLATE_SUCCESS : ISOLATE_NONE;
 }
 
-static int compact_finished(struct zone *zone, struct compact_control *cc,
+static int __compact_finished(struct zone *zone, struct compact_control *cc,
 			    const int migratetype)
 {
 	unsigned int order;
@@ -1103,7 +1105,21 @@ static int compact_finished(struct zone *zone, struct compact_control *cc,
 			return COMPACT_PARTIAL;
 	}
 
-	return COMPACT_CONTINUE;
+	return COMPACT_NO_SUITABLE_PAGE;
+}
+
+static int compact_finished(struct zone *zone, struct compact_control *cc,
+			    const int migratetype)
+{
+	int ret;
+
+	ret = __compact_finished(zone, cc, migratetype);
+	trace_mm_compaction_finished(zone, cc->order, cc->alloc_flags,
+						cc->classzone_idx, ret);
+	if (ret == COMPACT_NO_SUITABLE_PAGE)
+		ret = COMPACT_CONTINUE;
+
+	return ret;
 }
 
 /*
@@ -1113,7 +1129,7 @@ static int compact_finished(struct zone *zone, struct compact_control *cc,
  *   COMPACT_PARTIAL  - If the allocation would succeed without compaction
  *   COMPACT_CONTINUE - If compaction should run now
  */
-unsigned long compaction_suitable(struct zone *zone, int order,
+static unsigned long __compaction_suitable(struct zone *zone, int order,
 					int alloc_flags, int classzone_idx)
 {
 	int fragindex;
@@ -1157,11 +1173,25 @@ unsigned long compaction_suitable(struct zone *zone, int order,
 	 */
 	fragindex = fragmentation_index(zone, order);
 	if (fragindex >= 0 && fragindex <= sysctl_extfrag_threshold)
-		return COMPACT_SKIPPED;
+		return COMPACT_NOT_SUITABLE_ZONE;
 
 	return COMPACT_CONTINUE;
 }
 
+unsigned long compaction_suitable(struct zone *zone, int order,
+					int alloc_flags, int classzone_idx)
+{
+	unsigned long ret;
+
+	ret = __compaction_suitable(zone, order, alloc_flags, classzone_idx);
+	trace_mm_compaction_suitable(zone, order, alloc_flags,
+						classzone_idx, ret);
+	if (ret == COMPACT_NOT_SUITABLE_ZONE)
+		ret = COMPACT_SKIPPED;
+
+	return ret;
+}
+
 static int compact_zone(struct zone *zone, struct compact_control *cc)
 {
 	int ret;
@@ -1377,6 +1407,9 @@ unsigned long try_to_compact_pages(struct zonelist *zonelist,
 	if (!order || !may_enter_fs || !may_perform_io)
 		return COMPACT_SKIPPED;
 
+	trace_mm_compaction_try_to_compact_pages(order, gfp_mask, mode,
+					alloc_flags, classzone_idx);
+
 	/* Compact each zone in the list */
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, high_zoneidx,
 								nodemask) {
-- 
1.7.9.5


+++ b/mm/compaction.c
@@ -25,6 +25,8 @@ char *compaction_status_string[] = {
 	"continue",
 	"partial",
 	"complete",
+	"no_suitable_page",
+	"not_suitable_zone",
 };
 
 static inline void count_compact_event(enum vm_event_item item)
@@ -1048,7 +1050,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 	return cc->nr_migratepages ? ISOLATE_SUCCESS : ISOLATE_NONE;
 }
 
-static int compact_finished(struct zone *zone, struct compact_control *cc,
+static int __compact_finished(struct zone *zone, struct compact_control *cc,
 			    const int migratetype)
 {
 	unsigned int order;
@@ -1103,7 +1105,21 @@ static int compact_finished(struct zone *zone, struct compact_control *cc,
 			return COMPACT_PARTIAL;
 	}
 
-	return COMPACT_CONTINUE;
+	return COMPACT_NO_SUITABLE_PAGE;
+}
+
+static int compact_finished(struct zone *zone, struct compact_control *cc,
+			    const int migratetype)
+{
+	int ret;
+
+	ret = __compact_finished(zone, cc, migratetype);
+	trace_mm_compaction_finished(zone, cc->order, cc->alloc_flags,
+						cc->classzone_idx, ret);
+	if (ret == COMPACT_NO_SUITABLE_PAGE)
+		ret = COMPACT_CONTINUE;
+
+	return ret;
 }
 
 /*
@@ -1113,7 +1129,7 @@ static int compact_finished(struct zone *zone, struct compact_control *cc,
  *   COMPACT_PARTIAL  - If the allocation would succeed without compaction
  *   COMPACT_CONTINUE - If compaction should run now
  */
-unsigned long compaction_suitable(struct zone *zone, int order,
+static unsigned long __compaction_suitable(struct zone *zone, int order,
 					int alloc_flags, int classzone_idx)
 {
 	int fragindex;
@@ -1157,11 +1173,25 @@ unsigned long compaction_suitable(struct zone *zone, int order,
 	 */
 	fragindex = fragmentation_index(zone, order);
 	if (fragindex >= 0 && fragindex <= sysctl_extfrag_threshold)
-		return COMPACT_SKIPPED;
+		return COMPACT_NOT_SUITABLE_ZONE;
 
 	return COMPACT_CONTINUE;
 }
 
+unsigned long compaction_suitable(struct zone *zone, int order,
+					int alloc_flags, int classzone_idx)
+{
+	unsigned long ret;
+
+	ret = __compaction_suitable(zone, order, alloc_flags, classzone_idx);
+	trace_mm_compaction_suitable(zone, order, alloc_flags,
+						classzone_idx, ret);
+	if (ret == COMPACT_NOT_SUITABLE_ZONE)
+		ret = COMPACT_SKIPPED;
+
+	return ret;
+}
+
 static int compact_zone(struct zone *zone, struct compact_control *cc)
 {
 	int ret;
@@ -1377,6 +1407,9 @@ unsigned long try_to_compact_pages(struct zonelist *zonelist,
 	if (!order || !may_enter_fs || !may_perform_io)
 		return COMPACT_SKIPPED;
 
+	trace_mm_compaction_try_to_compact_pages(order, gfp_mask, mode,
+					alloc_flags, classzone_idx);
+
 	/* Compact each zone in the list */
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, high_zoneidx,
 								nodemask) {
-- 
1.7.9.5

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 5/5] mm/compaction: add tracepoint to observe behaviour of compaction defer
  2015-01-12  8:21 ` Joonsoo Kim
@ 2015-01-12  8:21   ` Joonsoo Kim
  -1 siblings, 0 replies; 28+ messages in thread
From: Joonsoo Kim @ 2015-01-12  8:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Mel Gorman, David Rientjes, linux-mm,
	linux-kernel, Joonsoo Kim

The compaction deferring logic is a heavy hammer that blocks the way to
compaction. It doesn't consider overall system state, so it could wrongly
prevent the user from compacting. In other words, even if the system has
a large enough range of memory to compact, compaction would be skipped
due to the deferring logic. This patch adds new tracepoints to help
understand how the deferring logic works. They will also help to check
compaction success and failure.
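To illustrate what the deferring logic this patch moves out of line does
(and hence what the new tracepoints will report), here is a minimal Python
model of the backoff; the Zone class and its starting values are simplified
assumptions, not kernel code:

```python
COMPACT_MAX_DEFER_SHIFT = 6  # do not skip compaction more than 64 times

class Zone:
    """Simplified stand-in for struct zone's defer-tracking fields."""
    def __init__(self):
        self.compact_considered = 0
        self.compact_defer_shift = 0
        self.compact_order_failed = 0

def defer_compaction(zone, order):
    # Called after a failed compaction: reset the counter and
    # double the skip limit, capped at 1 << COMPACT_MAX_DEFER_SHIFT.
    zone.compact_considered = 0
    zone.compact_defer_shift = min(zone.compact_defer_shift + 1,
                                   COMPACT_MAX_DEFER_SHIFT)
    if order < zone.compact_order_failed:
        zone.compact_order_failed = order

def compaction_deferred(zone, order):
    # Returns True if compaction should be skipped this time.
    if order < zone.compact_order_failed:
        return False
    defer_limit = 1 << zone.compact_defer_shift
    zone.compact_considered = min(zone.compact_considered + 1, defer_limit)
    return zone.compact_considered < defer_limit

zone = Zone()
defer_compaction(zone, order=2)  # first failure: skip limit becomes 2
skips = [compaction_deferred(zone, 2) for _ in range(3)]
print(skips)  # one attempt is skipped, then compaction may run again
```

Each further failure doubles the number of skipped attempts, which is why
a deferred zone can look "stuck" to userspace; the new tracepoints make
those skips visible.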

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/compaction.h        |   65 +++------------------------------
 include/trace/events/compaction.h |   55 ++++++++++++++++++++++++++++
 mm/compaction.c                   |   72 +++++++++++++++++++++++++++++++++++++
 3 files changed, 132 insertions(+), 60 deletions(-)

diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index d82181a..026ff64 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -44,66 +44,11 @@ extern void reset_isolation_suitable(pg_data_t *pgdat);
 extern unsigned long compaction_suitable(struct zone *zone, int order,
 					int alloc_flags, int classzone_idx);
 
-/* Do not skip compaction more than 64 times */
-#define COMPACT_MAX_DEFER_SHIFT 6
-
-/*
- * Compaction is deferred when compaction fails to result in a page
- * allocation success. 1 << compact_defer_limit compactions are skipped up
- * to a limit of 1 << COMPACT_MAX_DEFER_SHIFT
- */
-static inline void defer_compaction(struct zone *zone, int order)
-{
-	zone->compact_considered = 0;
-	zone->compact_defer_shift++;
-
-	if (order < zone->compact_order_failed)
-		zone->compact_order_failed = order;
-
-	if (zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
-		zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
-}
-
-/* Returns true if compaction should be skipped this time */
-static inline bool compaction_deferred(struct zone *zone, int order)
-{
-	unsigned long defer_limit = 1UL << zone->compact_defer_shift;
-
-	if (order < zone->compact_order_failed)
-		return false;
-
-	/* Avoid possible overflow */
-	if (++zone->compact_considered > defer_limit)
-		zone->compact_considered = defer_limit;
-
-	return zone->compact_considered < defer_limit;
-}
-
-/*
- * Update defer tracking counters after successful compaction of given order,
- * which means an allocation either succeeded (alloc_success == true) or is
- * expected to succeed.
- */
-static inline void compaction_defer_reset(struct zone *zone, int order,
-		bool alloc_success)
-{
-	if (alloc_success) {
-		zone->compact_considered = 0;
-		zone->compact_defer_shift = 0;
-	}
-	if (order >= zone->compact_order_failed)
-		zone->compact_order_failed = order + 1;
-}
-
-/* Returns true if restarting compaction after many failures */
-static inline bool compaction_restarting(struct zone *zone, int order)
-{
-	if (order < zone->compact_order_failed)
-		return false;
-
-	return zone->compact_defer_shift == COMPACT_MAX_DEFER_SHIFT &&
-		zone->compact_considered >= 1UL << zone->compact_defer_shift;
-}
+extern void defer_compaction(struct zone *zone, int order);
+extern bool compaction_deferred(struct zone *zone, int order);
+extern void compaction_defer_reset(struct zone *zone, int order,
+				bool alloc_success);
+extern bool compaction_restarting(struct zone *zone, int order);
 
 #else
 static inline unsigned long try_to_compact_pages(struct zonelist *zonelist,
diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
index 839dd4f..f879f41 100644
--- a/include/trace/events/compaction.h
+++ b/include/trace/events/compaction.h
@@ -258,6 +258,61 @@ DEFINE_EVENT(mm_compaction_suitable_template, mm_compaction_suitable,
 	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret)
 );
 
+DECLARE_EVENT_CLASS(mm_compaction_defer_template,
+
+	TP_PROTO(struct zone *zone, int order),
+
+	TP_ARGS(zone, order),
+
+	TP_STRUCT__entry(
+		__field(int, nid)
+		__field(char *, name)
+		__field(int, order)
+		__field(unsigned int, considered)
+		__field(unsigned int, defer_shift)
+		__field(int, order_failed)
+	),
+
+	TP_fast_assign(
+		__entry->nid = zone_to_nid(zone);
+		__entry->name = (char *)zone->name;
+		__entry->order = order;
+		__entry->considered = zone->compact_considered;
+		__entry->defer_shift = zone->compact_defer_shift;
+		__entry->order_failed = zone->compact_order_failed;
+	),
+
+	TP_printk("node=%d zone=%-8s order=%d order_failed=%d reason=%s consider=%u limit=%lu",
+		__entry->nid,
+		__entry->name,
+		__entry->order,
+		__entry->order_failed,
+		__entry->order < __entry->order_failed ? "order" : "try",
+		__entry->considered,
+		1UL << __entry->defer_shift)
+);
+
+DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_deffered,
+
+	TP_PROTO(struct zone *zone, int order),
+
+	TP_ARGS(zone, order)
+);
+
+DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_defer_compaction,
+
+	TP_PROTO(struct zone *zone, int order),
+
+	TP_ARGS(zone, order)
+);
+
+DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_defer_reset,
+
+	TP_PROTO(struct zone *zone, int order),
+
+	TP_ARGS(zone, order)
+);
+
 #endif /* _TRACE_COMPACTION_H */
 
 /* This part must be outside protection */
diff --git a/mm/compaction.c b/mm/compaction.c
index 7500f01..7aa4249 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -123,6 +123,77 @@ static struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 }
 
 #ifdef CONFIG_COMPACTION
+
+/* Do not skip compaction more than 64 times */
+#define COMPACT_MAX_DEFER_SHIFT 6
+
+/*
+ * Compaction is deferred when compaction fails to result in a page
+ * allocation success. 1 << compact_defer_limit compactions are skipped up
+ * to a limit of 1 << COMPACT_MAX_DEFER_SHIFT
+ */
+void defer_compaction(struct zone *zone, int order)
+{
+	zone->compact_considered = 0;
+	zone->compact_defer_shift++;
+
+	if (order < zone->compact_order_failed)
+		zone->compact_order_failed = order;
+
+	if (zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
+		zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
+
+	trace_mm_compaction_defer_compaction(zone, order);
+}
+
+/* Returns true if compaction should be skipped this time */
+bool compaction_deferred(struct zone *zone, int order)
+{
+	unsigned long defer_limit = 1UL << zone->compact_defer_shift;
+
+	if (order < zone->compact_order_failed)
+		return false;
+
+	/* Avoid possible overflow */
+	if (++zone->compact_considered > defer_limit)
+		zone->compact_considered = defer_limit;
+
+	if (zone->compact_considered >= defer_limit)
+		return false;
+
+	trace_mm_compaction_deffered(zone, order);
+
+	return true;
+}
+
+/*
+ * Update defer tracking counters after successful compaction of given order,
+ * which means an allocation either succeeded (alloc_success == true) or is
+ * expected to succeed.
+ */
+void compaction_defer_reset(struct zone *zone, int order,
+		bool alloc_success)
+{
+	if (alloc_success) {
+		zone->compact_considered = 0;
+		zone->compact_defer_shift = 0;
+	}
+	if (order >= zone->compact_order_failed)
+		zone->compact_order_failed = order + 1;
+
+	trace_mm_compaction_defer_reset(zone, order);
+}
+
+/* Returns true if restarting compaction after many failures */
+bool compaction_restarting(struct zone *zone, int order)
+{
+	if (order < zone->compact_order_failed)
+		return false;
+
+	return zone->compact_defer_shift == COMPACT_MAX_DEFER_SHIFT &&
+		zone->compact_considered >= 1UL << zone->compact_defer_shift;
+}
+
 /* Returns true if the pageblock should be scanned for pages to isolate. */
 static inline bool isolation_suitable(struct compact_control *cc,
 					struct page *page)
@@ -1438,6 +1509,7 @@ unsigned long try_to_compact_pages(struct zonelist *zonelist,
 			 * succeeds in this zone.
 			 */
 			compaction_defer_reset(zone, order, false);
+
 			/*
 			 * It is possible that async compaction aborted due to
 			 * need_resched() and the watermarks were ok thanks to
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 1/5] mm/compaction: change tracepoint format from decimal to hexadecimal
  2015-01-12  8:21 ` Joonsoo Kim
@ 2015-01-12 14:23   ` Vlastimil Babka
  -1 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2015-01-12 14:23 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, linux-mm, linux-kernel

On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> To check the range that compaction is working, tracepoint print
> start/end pfn of zone and start pfn of both scanner with decimal format.
> Since we manage all pages in order of 2 and it is well represented by
> hexadecimal, this patch change the tracepoint format from decimal to
> hexadecimal. This would improve readability. For example, it makes us
> easily notice whether current scanner try to compact previously
> attempted pageblock or not.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
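One concrete illustration of the readability argument (an editor's sketch,
not part of the patch): assuming the common x86 configuration of order-9
pageblocks (512 pages), pageblock alignment of a pfn is obvious in
hexadecimal but not in decimal:

```python
PAGEBLOCK_PAGES = 1 << 9  # assumed order-9 (512-page) pageblocks, as on x86

pfn = 38400  # hypothetical scanner position, as the old decimal format shows it
print(hex(pfn))  # the new format shows 0x9600, an obvious multiple of 0x200
print(pfn % PAGEBLOCK_PAGES == 0)  # pageblock-aligned, hard to spot in decimal
```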

> ---
>  include/trace/events/compaction.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
> index c6814b9..1337d9e 100644
> --- a/include/trace/events/compaction.h
> +++ b/include/trace/events/compaction.h
> @@ -104,7 +104,7 @@ TRACE_EVENT(mm_compaction_begin,
>  		__entry->zone_end = zone_end;
>  	),
>  
> -	TP_printk("zone_start=%lu migrate_start=%lu free_start=%lu zone_end=%lu",
> +	TP_printk("zone_start=0x%lx migrate_start=0x%lx free_start=0x%lx zone_end=0x%lx",
>  		__entry->zone_start,
>  		__entry->migrate_start,
>  		__entry->free_start,
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 2/5] mm/compaction: enhance tracepoint output for compaction begin/end
  2015-01-12  8:21   ` Joonsoo Kim
@ 2015-01-12 14:32     ` Vlastimil Babka
  -1 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2015-01-12 14:32 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, linux-mm, linux-kernel

On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> We now have tracepoint for begin event of compaction and it prints
> start position of both scanners, but, tracepoint for end event of
> compaction doesn't print finish position of both scanners. It'd be
> also useful to know finish position of both scanners so this patch
> add it. It will help to find odd behavior or problem on compaction
> internal logic.
> 
> And, mode is added to both begin/end tracepoint output, since
> according to mode, compaction behavior is quite different.
> 
> And, lastly, status format is changed to string rather than
> status number for readability.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>
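As a quick illustration (not from a real trace), the enhanced begin/end
formats produce lines like the following sketch, which mirrors the
TP_printk() strings in the quoted patch; all pfn values are invented:

```python
def begin_line(zone_start, migrate_pfn, free_pfn, zone_end, sync):
    # Mirrors the mm_compaction_begin TP_printk format from the patch.
    return ("zone_start=0x%x migrate_pfn=0x%x free_pfn=0x%x zone_end=0x%x, mode=%s"
            % (zone_start, migrate_pfn, free_pfn, zone_end,
               "sync" if sync else "async"))

def end_line(zone_start, migrate_pfn, free_pfn, zone_end, sync, status):
    # mm_compaction_end now prints the same fields plus the status string.
    return (begin_line(zone_start, migrate_pfn, free_pfn, zone_end, sync)
            + " status=%s" % status)

print(begin_line(0x10000, 0x10200, 0x3fe00, 0x40000, True))
print(end_line(0x10000, 0x12a00, 0x12c00, 0x40000, True, "complete"))
```

Comparing the begin and end lines directly shows how far each scanner
moved before the run finished, which the old status-only end event could
not tell you.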

> ---
>  include/linux/compaction.h        |    2 ++
>  include/trace/events/compaction.h |   49 ++++++++++++++++++++++++++-----------
>  mm/compaction.c                   |   14 +++++++++--
>  3 files changed, 49 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> index 3238ffa..a9547b6 100644
> --- a/include/linux/compaction.h
> +++ b/include/linux/compaction.h
> @@ -12,6 +12,7 @@
>  #define COMPACT_PARTIAL		3
>  /* The full zone was compacted */
>  #define COMPACT_COMPLETE	4
> +/* When adding new state, please change compaction_status_string, too */
>  
>  /* Used to signal whether compaction detected need_sched() or lock contention */
>  /* No contention detected */
> @@ -22,6 +23,7 @@
>  #define COMPACT_CONTENDED_LOCK	2
>  
>  #ifdef CONFIG_COMPACTION
> +extern char *compaction_status_string[];
>  extern int sysctl_compact_memory;
>  extern int sysctl_compaction_handler(struct ctl_table *table, int write,
>  			void __user *buffer, size_t *length, loff_t *ppos);
> diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
> index 1337d9e..839f6fa 100644
> --- a/include/trace/events/compaction.h
> +++ b/include/trace/events/compaction.h
> @@ -85,46 +85,67 @@ TRACE_EVENT(mm_compaction_migratepages,
>  );
>  
>  TRACE_EVENT(mm_compaction_begin,
> -	TP_PROTO(unsigned long zone_start, unsigned long migrate_start,
> -		unsigned long free_start, unsigned long zone_end),
> +	TP_PROTO(unsigned long zone_start, unsigned long migrate_pfn,
> +		unsigned long free_pfn, unsigned long zone_end, bool sync),
>  
> -	TP_ARGS(zone_start, migrate_start, free_start, zone_end),
> +	TP_ARGS(zone_start, migrate_pfn, free_pfn, zone_end, sync),
>  
>  	TP_STRUCT__entry(
>  		__field(unsigned long, zone_start)
> -		__field(unsigned long, migrate_start)
> -		__field(unsigned long, free_start)
> +		__field(unsigned long, migrate_pfn)
> +		__field(unsigned long, free_pfn)
>  		__field(unsigned long, zone_end)
> +		__field(bool, sync)
>  	),
>  
>  	TP_fast_assign(
>  		__entry->zone_start = zone_start;
> -		__entry->migrate_start = migrate_start;
> -		__entry->free_start = free_start;
> +		__entry->migrate_pfn = migrate_pfn;
> +		__entry->free_pfn = free_pfn;
>  		__entry->zone_end = zone_end;
> +		__entry->sync = sync;
>  	),
>  
> -	TP_printk("zone_start=0x%lx migrate_start=0x%lx free_start=0x%lx zone_end=0x%lx",
> +	TP_printk("zone_start=0x%lx migrate_pfn=0x%lx free_pfn=0x%lx zone_end=0x%lx, mode=%s",
>  		__entry->zone_start,
> -		__entry->migrate_start,
> -		__entry->free_start,
> -		__entry->zone_end)
> +		__entry->migrate_pfn,
> +		__entry->free_pfn,
> +		__entry->zone_end,
> +		__entry->sync ? "sync" : "async")
>  );
>  
>  TRACE_EVENT(mm_compaction_end,
> -	TP_PROTO(int status),
> +	TP_PROTO(unsigned long zone_start, unsigned long migrate_pfn,
> +		unsigned long free_pfn, unsigned long zone_end, bool sync,
> +		int status),
>  
> -	TP_ARGS(status),
> +	TP_ARGS(zone_start, migrate_pfn, free_pfn, zone_end, sync, status),
>  
>  	TP_STRUCT__entry(
> +		__field(unsigned long, zone_start)
> +		__field(unsigned long, migrate_pfn)
> +		__field(unsigned long, free_pfn)
> +		__field(unsigned long, zone_end)
> +		__field(bool, sync)
>  		__field(int, status)
>  	),
>  
>  	TP_fast_assign(
> +		__entry->zone_start = zone_start;
> +		__entry->migrate_pfn = migrate_pfn;
> +		__entry->free_pfn = free_pfn;
> +		__entry->zone_end = zone_end;
> +		__entry->sync = sync;
>  		__entry->status = status;
>  	),
>  
> -	TP_printk("status=%d", __entry->status)
> +	TP_printk("zone_start=0x%lx migrate_pfn=0x%lx free_pfn=0x%lx zone_end=0x%lx, mode=%s status=%s",
> +		__entry->zone_start,
> +		__entry->migrate_pfn,
> +		__entry->free_pfn,
> +		__entry->zone_end,
> +		__entry->sync ? "sync" : "async",
> +		compaction_status_string[__entry->status])
>  );
>  
>  #endif /* _TRACE_COMPACTION_H */
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 546e571..2d86a20 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -19,6 +19,14 @@
>  #include "internal.h"
>  
>  #ifdef CONFIG_COMPACTION
> +char *compaction_status_string[] = {
> +	"deferred",
> +	"skipped",
> +	"continue",
> +	"partial",
> +	"complete",
> +};
> +
>  static inline void count_compact_event(enum vm_event_item item)
>  {
>  	count_vm_event(item);
> @@ -1197,7 +1205,8 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
>  		zone->compact_cached_migrate_pfn[1] = cc->migrate_pfn;
>  	}
>  
> -	trace_mm_compaction_begin(start_pfn, cc->migrate_pfn, cc->free_pfn, end_pfn);
> +	trace_mm_compaction_begin(start_pfn, cc->migrate_pfn,
> +				cc->free_pfn, end_pfn, sync);
>  
>  	migrate_prep_local();
>  
> @@ -1299,7 +1308,8 @@ out:
>  			zone->compact_cached_free_pfn = free_pfn;
>  	}
>  
> -	trace_mm_compaction_end(ret);
> +	trace_mm_compaction_end(start_pfn, cc->migrate_pfn,
> +				cc->free_pfn, end_pfn, sync, ret);
>  
>  	return ret;
>  }
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 3/5] mm/compaction: print current range where compaction work
  2015-01-12  8:21   ` Joonsoo Kim
@ 2015-01-12 14:34     ` Vlastimil Babka
  -1 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2015-01-12 14:34 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, linux-mm, linux-kernel

On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> It'd be useful to know the current range where compaction works, for
> detailed analysis. With it, we can know which pageblock we actually scan
> and isolate from, and how many pages we try in that pageblock, and can
> roughly guess why it doesn't become a freepage of pageblock order.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  include/trace/events/compaction.h |   30 +++++++++++++++++++++++-------
>  mm/compaction.c                   |    9 ++++++---
>  2 files changed, 29 insertions(+), 10 deletions(-)
> 
> diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
> index 839f6fa..139020b 100644
> --- a/include/trace/events/compaction.h
> +++ b/include/trace/events/compaction.h
> @@ -11,39 +11,55 @@
>  
>  DECLARE_EVENT_CLASS(mm_compaction_isolate_template,
>  
> -	TP_PROTO(unsigned long nr_scanned,
> +	TP_PROTO(
> +		unsigned long start_pfn,
> +		unsigned long end_pfn,
> +		unsigned long nr_scanned,
>  		unsigned long nr_taken),
>  
> -	TP_ARGS(nr_scanned, nr_taken),
> +	TP_ARGS(start_pfn, end_pfn, nr_scanned, nr_taken),
>  
>  	TP_STRUCT__entry(
> +		__field(unsigned long, start_pfn)
> +		__field(unsigned long, end_pfn)
>  		__field(unsigned long, nr_scanned)
>  		__field(unsigned long, nr_taken)
>  	),
>  
>  	TP_fast_assign(
> +		__entry->start_pfn = start_pfn;
> +		__entry->end_pfn = end_pfn;
>  		__entry->nr_scanned = nr_scanned;
>  		__entry->nr_taken = nr_taken;
>  	),
>  
> -	TP_printk("nr_scanned=%lu nr_taken=%lu",
> +	TP_printk("range=(0x%lx ~ 0x%lx) nr_scanned=%lu nr_taken=%lu",
> +		__entry->start_pfn,
> +		__entry->end_pfn,
>  		__entry->nr_scanned,
>  		__entry->nr_taken)
>  );
>  
>  DEFINE_EVENT(mm_compaction_isolate_template, mm_compaction_isolate_migratepages,
>  
> -	TP_PROTO(unsigned long nr_scanned,
> +	TP_PROTO(
> +		unsigned long start_pfn,
> +		unsigned long end_pfn,
> +		unsigned long nr_scanned,
>  		unsigned long nr_taken),
>  
> -	TP_ARGS(nr_scanned, nr_taken)
> +	TP_ARGS(start_pfn, end_pfn, nr_scanned, nr_taken)
>  );
>  
>  DEFINE_EVENT(mm_compaction_isolate_template, mm_compaction_isolate_freepages,
> -	TP_PROTO(unsigned long nr_scanned,
> +
> +	TP_PROTO(
> +		unsigned long start_pfn,
> +		unsigned long end_pfn,
> +		unsigned long nr_scanned,
>  		unsigned long nr_taken),
>  
> -	TP_ARGS(nr_scanned, nr_taken)
> +	TP_ARGS(start_pfn, end_pfn, nr_scanned, nr_taken)
>  );
>  
>  TRACE_EVENT(mm_compaction_migratepages,
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 2d86a20..be28469 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -429,11 +429,12 @@ isolate_fail:
>  
>  	}
>  
> +	trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
> +					nr_scanned, total_isolated);
> +
>  	/* Record how far we have got within the block */
>  	*start_pfn = blockpfn;
>  
> -	trace_mm_compaction_isolate_freepages(nr_scanned, total_isolated);
> -
>  	/*
>  	 * If strict isolation is requested by CMA then check that all the
>  	 * pages requested were isolated. If there were any failures, 0 is
> @@ -589,6 +590,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  	unsigned long flags = 0;
>  	bool locked = false;
>  	struct page *page = NULL, *valid_page = NULL;
> +	unsigned long start_pfn = low_pfn;
>  
>  	/*
>  	 * Ensure that there are not too many pages isolated from the LRU
> @@ -749,7 +751,8 @@ isolate_success:
>  	if (low_pfn == end_pfn)
>  		update_pageblock_skip(cc, valid_page, nr_isolated, true);
>  
> -	trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
> +	trace_mm_compaction_isolate_migratepages(start_pfn, low_pfn,
> +						nr_scanned, nr_isolated);
>  
>  	count_compact_events(COMPACTMIGRATE_SCANNED, nr_scanned);
>  	if (nr_isolated)
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/5] mm/compaction: more trace to understand when/why compaction start/finish
  2015-01-12  8:21   ` Joonsoo Kim
@ 2015-01-12 15:53     ` Vlastimil Babka
  -1 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2015-01-12 15:53 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, linux-mm, linux-kernel

On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> It is not well understood when and why compaction starts and finishes.
> With these new tracepoints, we can know much more about the reasons
> compaction starts and finishes. I could find the following bug with
> these tracepoints.
> 
> http://www.spinics.net/lists/linux-mm/msg81582.html
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  include/linux/compaction.h        |    3 ++
>  include/trace/events/compaction.h |   94 +++++++++++++++++++++++++++++++++++++
>  mm/compaction.c                   |   41 ++++++++++++++--
>  3 files changed, 134 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> index a9547b6..d82181a 100644
> --- a/include/linux/compaction.h
> +++ b/include/linux/compaction.h
> @@ -12,6 +12,9 @@
>  #define COMPACT_PARTIAL		3
>  /* The full zone was compacted */
>  #define COMPACT_COMPLETE	4
> +/* For more detailed tracepoint output */
> +#define COMPACT_NO_SUITABLE_PAGE	5
> +#define COMPACT_NOT_SUITABLE_ZONE	6
>  /* When adding new state, please change compaction_status_string, too */
>  
>  /* Used to signal whether compaction detected need_sched() or lock contention */
> diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
> index 139020b..839dd4f 100644
> --- a/include/trace/events/compaction.h
> +++ b/include/trace/events/compaction.h
> @@ -164,6 +164,100 @@ TRACE_EVENT(mm_compaction_end,
>  		compaction_status_string[__entry->status])
>  );
>  
> +TRACE_EVENT(mm_compaction_try_to_compact_pages,
> +
> +	TP_PROTO(
> +		int order,
> +		gfp_t gfp_mask,
> +		enum migrate_mode mode,
> +		int alloc_flags,
> +		int classzone_idx),

I wonder if alloc_flags and classzone_idx are particularly useful. They affect
the watermark checks, but those are a bit of a black box anyway.

> +	TP_ARGS(order, gfp_mask, mode, alloc_flags, classzone_idx),
> +
> +	TP_STRUCT__entry(
> +		__field(int, order)
> +		__field(gfp_t, gfp_mask)
> +		__field(enum migrate_mode, mode)
> +		__field(int, alloc_flags)
> +		__field(int, classzone_idx)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->order = order;
> +		__entry->gfp_mask = gfp_mask;
> +		__entry->mode = mode;
> +		__entry->alloc_flags = alloc_flags;
> +		__entry->classzone_idx = classzone_idx;
> +	),
> +
> +	TP_printk("order=%d gfp_mask=0x%x mode=%d alloc_flags=0x%x classzone_idx=%d",
> +		__entry->order,
> +		__entry->gfp_mask,
> +		(int)__entry->mode,
> +		__entry->alloc_flags,
> +		__entry->classzone_idx)
> +);
> +
> +DECLARE_EVENT_CLASS(mm_compaction_suitable_template,
> +
> +	TP_PROTO(struct zone *zone,
> +		int order,
> +		int alloc_flags,
> +		int classzone_idx,
> +		int ret),
> +
> +	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret),
> +
> +	TP_STRUCT__entry(
> +		__field(int, nid)
> +		__field(char *, name)
> +		__field(int, order)
> +		__field(int, alloc_flags)
> +		__field(int, classzone_idx)
> +		__field(int, ret)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->nid = zone_to_nid(zone);
> +		__entry->name = (char *)zone->name;
> +		__entry->order = order;
> +		__entry->alloc_flags = alloc_flags;
> +		__entry->classzone_idx = classzone_idx;
> +		__entry->ret = ret;
> +	),
> +
> +	TP_printk("node=%d zone=%-8s order=%d alloc_flags=0x%x classzone_idx=%d ret=%s",
> +		__entry->nid,
> +		__entry->name,
> +		__entry->order,
> +		__entry->alloc_flags,
> +		__entry->classzone_idx,
> +		compaction_status_string[__entry->ret])
> +);
> +
> +DEFINE_EVENT(mm_compaction_suitable_template, mm_compaction_finished,
> +
> +	TP_PROTO(struct zone *zone,
> +		int order,
> +		int alloc_flags,
> +		int classzone_idx,
> +		int ret),
> +
> +	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret)
> +);
> +
> +DEFINE_EVENT(mm_compaction_suitable_template, mm_compaction_suitable,
> +
> +	TP_PROTO(struct zone *zone,
> +		int order,
> +		int alloc_flags,
> +		int classzone_idx,
> +		int ret),
> +
> +	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret)
> +);
> +
>  #endif /* _TRACE_COMPACTION_H */
>  
>  /* This part must be outside protection */
> diff --git a/mm/compaction.c b/mm/compaction.c
> index be28469..7500f01 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -25,6 +25,8 @@ char *compaction_status_string[] = {
>  	"continue",
>  	"partial",
>  	"complete",
> +	"no_suitable_page",
> +	"not_suitable_zone",
>  };
>  
>  static inline void count_compact_event(enum vm_event_item item)
> @@ -1048,7 +1050,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
>  	return cc->nr_migratepages ? ISOLATE_SUCCESS : ISOLATE_NONE;
>  }
>  
> -static int compact_finished(struct zone *zone, struct compact_control *cc,
> +static int __compact_finished(struct zone *zone, struct compact_control *cc,
>  			    const int migratetype)
>  {
>  	unsigned int order;
> @@ -1103,7 +1105,21 @@ static int compact_finished(struct zone *zone, struct compact_control *cc,
>  			return COMPACT_PARTIAL;
>  	}
>  
> -	return COMPACT_CONTINUE;
> +	return COMPACT_NO_SUITABLE_PAGE;
> +}
> +
> +static int compact_finished(struct zone *zone, struct compact_control *cc,
> +			    const int migratetype)
> +{
> +	int ret;
> +
> +	ret = __compact_finished(zone, cc, migratetype);
> +	trace_mm_compaction_finished(zone, cc->order, cc->alloc_flags,
> +						cc->classzone_idx, ret);
> +	if (ret == COMPACT_NO_SUITABLE_PAGE)
> +		ret = COMPACT_CONTINUE;
> +
> +	return ret;
>  }
>  
>  /*
> @@ -1113,7 +1129,7 @@ static int compact_finished(struct zone *zone, struct compact_control *cc,
>   *   COMPACT_PARTIAL  - If the allocation would succeed without compaction
>   *   COMPACT_CONTINUE - If compaction should run now
>   */
> -unsigned long compaction_suitable(struct zone *zone, int order,
> +static unsigned long __compaction_suitable(struct zone *zone, int order,
>  					int alloc_flags, int classzone_idx)
>  {
>  	int fragindex;
> @@ -1157,11 +1173,25 @@ unsigned long compaction_suitable(struct zone *zone, int order,
>  	 */
>  	fragindex = fragmentation_index(zone, order);
>  	if (fragindex >= 0 && fragindex <= sysctl_extfrag_threshold)
> -		return COMPACT_SKIPPED;
> +		return COMPACT_NOT_SUITABLE_ZONE;
>  
>  	return COMPACT_CONTINUE;
>  }
>  
> +unsigned long compaction_suitable(struct zone *zone, int order,
> +					int alloc_flags, int classzone_idx)
> +{
> +	unsigned long ret;
> +
> +	ret = __compaction_suitable(zone, order, alloc_flags, classzone_idx);
> +	trace_mm_compaction_suitable(zone, order, alloc_flags,
> +						classzone_idx, ret);
> +	if (ret == COMPACT_NOT_SUITABLE_ZONE)
> +		ret = COMPACT_SKIPPED;
> +
> +	return ret;
> +}
> +
>  static int compact_zone(struct zone *zone, struct compact_control *cc)
>  {
>  	int ret;
> @@ -1377,6 +1407,9 @@ unsigned long try_to_compact_pages(struct zonelist *zonelist,
>  	if (!order || !may_enter_fs || !may_perform_io)
>  		return COMPACT_SKIPPED;
>  
> +	trace_mm_compaction_try_to_compact_pages(order, gfp_mask, mode,
> +					alloc_flags, classzone_idx);
> +
>  	/* Compact each zone in the list */
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, high_zoneidx,
>  								nodemask) {
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 5/5] mm/compaction: add tracepoint to observe behaviour of compaction defer
  2015-01-12  8:21   ` Joonsoo Kim
@ 2015-01-12 16:35     ` Vlastimil Babka
  -1 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2015-01-12 16:35 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, linux-mm, linux-kernel

On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> The compaction deferring logic is a heavy hammer that blocks the way to
> compaction. It doesn't consider overall system state, so it could
> wrongly prevent the user from doing compaction. In other words, even if
> the system has a large enough range of memory to compact, compaction
> would be skipped due to the deferring logic. This patch adds a new
> tracepoint to understand how the deferring logic works. It will also
> help to track compaction success and failure.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  include/linux/compaction.h        |   65 +++------------------------------
>  include/trace/events/compaction.h |   55 ++++++++++++++++++++++++++++
>  mm/compaction.c                   |   72 +++++++++++++++++++++++++++++++++++++
>  3 files changed, 132 insertions(+), 60 deletions(-)
> 
> diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> index d82181a..026ff64 100644
> --- a/include/linux/compaction.h
> +++ b/include/linux/compaction.h
> @@ -44,66 +44,11 @@ extern void reset_isolation_suitable(pg_data_t *pgdat);
>  extern unsigned long compaction_suitable(struct zone *zone, int order,
>  					int alloc_flags, int classzone_idx);
>  
> -/* Do not skip compaction more than 64 times */
> -#define COMPACT_MAX_DEFER_SHIFT 6
> -
> -/*
> - * Compaction is deferred when compaction fails to result in a page
> - * allocation success. 1 << compact_defer_limit compactions are skipped up
> - * to a limit of 1 << COMPACT_MAX_DEFER_SHIFT
> - */
> -static inline void defer_compaction(struct zone *zone, int order)
> -{
> -	zone->compact_considered = 0;
> -	zone->compact_defer_shift++;
> -
> -	if (order < zone->compact_order_failed)
> -		zone->compact_order_failed = order;
> -
> -	if (zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
> -		zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
> -}
> -
> -/* Returns true if compaction should be skipped this time */
> -static inline bool compaction_deferred(struct zone *zone, int order)
> -{
> -	unsigned long defer_limit = 1UL << zone->compact_defer_shift;
> -
> -	if (order < zone->compact_order_failed)
> -		return false;
> -
> -	/* Avoid possible overflow */
> -	if (++zone->compact_considered > defer_limit)
> -		zone->compact_considered = defer_limit;
> -
> -	return zone->compact_considered < defer_limit;
> -}
> -
> -/*
> - * Update defer tracking counters after successful compaction of given order,
> - * which means an allocation either succeeded (alloc_success == true) or is
> - * expected to succeed.
> - */
> -static inline void compaction_defer_reset(struct zone *zone, int order,
> -		bool alloc_success)
> -{
> -	if (alloc_success) {
> -		zone->compact_considered = 0;
> -		zone->compact_defer_shift = 0;
> -	}
> -	if (order >= zone->compact_order_failed)
> -		zone->compact_order_failed = order + 1;
> -}
> -
> -/* Returns true if restarting compaction after many failures */
> -static inline bool compaction_restarting(struct zone *zone, int order)
> -{
> -	if (order < zone->compact_order_failed)
> -		return false;
> -
> -	return zone->compact_defer_shift == COMPACT_MAX_DEFER_SHIFT &&
> -		zone->compact_considered >= 1UL << zone->compact_defer_shift;
> -}
> +extern void defer_compaction(struct zone *zone, int order);
> +extern bool compaction_deferred(struct zone *zone, int order);
> +extern void compaction_defer_reset(struct zone *zone, int order,
> +				bool alloc_success);
> +extern bool compaction_restarting(struct zone *zone, int order);
>  
>  #else
>  static inline unsigned long try_to_compact_pages(struct zonelist *zonelist,
> diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
> index 839dd4f..f879f41 100644
> --- a/include/trace/events/compaction.h
> +++ b/include/trace/events/compaction.h
> @@ -258,6 +258,61 @@ DEFINE_EVENT(mm_compaction_suitable_template, mm_compaction_suitable,
>  	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret)
>  );
>  
> +DECLARE_EVENT_CLASS(mm_compaction_defer_template,
> +
> +	TP_PROTO(struct zone *zone, int order),
> +
> +	TP_ARGS(zone, order),
> +
> +	TP_STRUCT__entry(
> +		__field(int, nid)
> +		__field(char *, name)
> +		__field(int, order)
> +		__field(unsigned int, considered)
> +		__field(unsigned int, defer_shift)
> +		__field(int, order_failed)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->nid = zone_to_nid(zone);
> +		__entry->name = (char *)zone->name;
> +		__entry->order = order;
> +		__entry->considered = zone->compact_considered;
> +		__entry->defer_shift = zone->compact_defer_shift;
> +		__entry->order_failed = zone->compact_order_failed;
> +	),
> +
> +	TP_printk("node=%d zone=%-8s order=%d order_failed=%d reason=%s consider=%u limit=%lu",
> +		__entry->nid,
> +		__entry->name,
> +		__entry->order,
> +		__entry->order_failed,
> +		__entry->order < __entry->order_failed ? "order" : "try",

This "reason" only makes sense for compaction_deferred, no? And "order" would
never be printed there anyway, because of bug below. Also it's quite trivial to
derive from the other data printed, so I would just remove it.

> +		__entry->considered,
> +		1UL << __entry->defer_shift)
> +);
> +
> +DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_deffered,

                                                            _deferred

> +
> +	TP_PROTO(struct zone *zone, int order),
> +
> +	TP_ARGS(zone, order)
> +);
> +
> +DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_defer_compaction,
> +
> +	TP_PROTO(struct zone *zone, int order),
> +
> +	TP_ARGS(zone, order)
> +);
> +
> +DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_defer_reset,
> +
> +	TP_PROTO(struct zone *zone, int order),
> +
> +	TP_ARGS(zone, order)
> +);
> +
>  #endif /* _TRACE_COMPACTION_H */
>  
>  /* This part must be outside protection */
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 7500f01..7aa4249 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -123,6 +123,77 @@ static struct page *pageblock_pfn_to_page(unsigned long start_pfn,
>  }
>  
>  #ifdef CONFIG_COMPACTION
> +
> +/* Do not skip compaction more than 64 times */
> +#define COMPACT_MAX_DEFER_SHIFT 6
> +
> +/*
> + * Compaction is deferred when compaction fails to result in a page
> + * allocation success. 1 << compact_defer_limit compactions are skipped up
> + * to a limit of 1 << COMPACT_MAX_DEFER_SHIFT
> + */
> +void defer_compaction(struct zone *zone, int order)
> +{
> +	zone->compact_considered = 0;
> +	zone->compact_defer_shift++;
> +
> +	if (order < zone->compact_order_failed)
> +		zone->compact_order_failed = order;
> +
> +	if (zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
> +		zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
> +
> +	trace_mm_compaction_defer_compaction(zone, order);
> +}
> +
> +/* Returns true if compaction should be skipped this time */
> +bool compaction_deferred(struct zone *zone, int order)
> +{
> +	unsigned long defer_limit = 1UL << zone->compact_defer_shift;
> +
> +	if (order < zone->compact_order_failed)

- no tracepoint (with reason="order") in this case?

> +		return false;
> +
> +	/* Avoid possible overflow */
> +	if (++zone->compact_considered > defer_limit)
> +		zone->compact_considered = defer_limit;
> +
> +	if (zone->compact_considered >= defer_limit)

- no tracepoint here as well? Oh did you want to trace just when it's true? That
makes sense, but then just remove the reason part.

Hm what if we avoided dirtying the cache line in the non-deferred case? Would be
simpler, too?

if (zone->compact_considered + 1 >= defer_limit)
     return false;

zone->compact_considered++;

trace_mm_compaction_defer_compaction(zone, order);

return true;

> +		return false;
> +
> +	trace_mm_compaction_deffered(zone, order);
> +
> +	return true;
> +}
> +
> +/*
> + * Update defer tracking counters after successful compaction of given order,
> + * which means an allocation either succeeded (alloc_success == true) or is
> + * expected to succeed.
> + */
> +void compaction_defer_reset(struct zone *zone, int order,
> +		bool alloc_success)
> +{
> +	if (alloc_success) {
> +		zone->compact_considered = 0;
> +		zone->compact_defer_shift = 0;
> +	}
> +	if (order >= zone->compact_order_failed)
> +		zone->compact_order_failed = order + 1;
> +
> +	trace_mm_compaction_defer_reset(zone, order);
> +}
> +
> +/* Returns true if restarting compaction after many failures */
> +bool compaction_restarting(struct zone *zone, int order)
> +{
> +	if (order < zone->compact_order_failed)
> +		return false;
> +
> +	return zone->compact_defer_shift == COMPACT_MAX_DEFER_SHIFT &&
> +		zone->compact_considered >= 1UL << zone->compact_defer_shift;
> +}
> +
>  /* Returns true if the pageblock should be scanned for pages to isolate. */
>  static inline bool isolation_suitable(struct compact_control *cc,
>  					struct page *page)
> @@ -1438,6 +1509,7 @@ unsigned long try_to_compact_pages(struct zonelist *zonelist,
>  			 * succeeds in this zone.
>  			 */
>  			compaction_defer_reset(zone, order, false);
> +
>  			/*
>  			 * It is possible that async compaction aborted due to
>  			 * need_resched() and the watermarks were ok thanks to
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread


* Re: [PATCH v2 4/5] mm/compaction: more trace to understand when/why compaction start/finish
  2015-01-12 15:53     ` Vlastimil Babka
@ 2015-01-13  7:16       ` Joonsoo Kim
  -1 siblings, 0 replies; 28+ messages in thread
From: Joonsoo Kim @ 2015-01-13  7:16 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, David Rientjes, linux-mm, linux-kernel

On Mon, Jan 12, 2015 at 04:53:53PM +0100, Vlastimil Babka wrote:
> On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> > It is not well analyzed that when/why compaction start/finish or not. With
> > these new tracepoints, we can know much more about start/finish reason of
> > compaction. I can find following bug with these tracepoint.
> > 
> > http://www.spinics.net/lists/linux-mm/msg81582.html
> > 
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > ---
> >  include/linux/compaction.h        |    3 ++
> >  include/trace/events/compaction.h |   94 +++++++++++++++++++++++++++++++++++++
> >  mm/compaction.c                   |   41 ++++++++++++++--
> >  3 files changed, 134 insertions(+), 4 deletions(-)
> > 
> > diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> > index a9547b6..d82181a 100644
> > --- a/include/linux/compaction.h
> > +++ b/include/linux/compaction.h
> > @@ -12,6 +12,9 @@
> >  #define COMPACT_PARTIAL		3
> >  /* The full zone was compacted */
> >  #define COMPACT_COMPLETE	4
> > +/* For more detailed tracepoint output */
> > +#define COMPACT_NO_SUITABLE_PAGE	5
> > +#define COMPACT_NOT_SUITABLE_ZONE	6
> >  /* When adding new state, please change compaction_status_string, too */
> >  
> >  /* Used to signal whether compaction detected need_sched() or lock contention */
> > diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
> > index 139020b..839dd4f 100644
> > --- a/include/trace/events/compaction.h
> > +++ b/include/trace/events/compaction.h
> > @@ -164,6 +164,100 @@ TRACE_EVENT(mm_compaction_end,
> >  		compaction_status_string[__entry->status])
> >  );
> >  
> > +TRACE_EVENT(mm_compaction_try_to_compact_pages,
> > +
> > +	TP_PROTO(
> > +		int order,
> > +		gfp_t gfp_mask,
> > +		enum migrate_mode mode,
> > +		int alloc_flags,
> > +		int classzone_idx),
> 
> I wonder if alloc_flags and classzone_idx is particularly useful. It affects the
> watermark checks, but those are a bit of blackbox anyway.

Yes, I think so. How about printing the gfp_mask rather than these? It would
tell us the migratetype and other information, so it would be useful.

Thanks.

^ permalink raw reply	[flat|nested] 28+ messages in thread


* Re: [PATCH v2 5/5] mm/compaction: add tracepoint to observe behaviour of compaction defer
  2015-01-12 16:35     ` Vlastimil Babka
@ 2015-01-13  7:18       ` Joonsoo Kim
  -1 siblings, 0 replies; 28+ messages in thread
From: Joonsoo Kim @ 2015-01-13  7:18 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, David Rientjes, linux-mm, linux-kernel

On Mon, Jan 12, 2015 at 05:35:47PM +0100, Vlastimil Babka wrote:
> On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> > compaction deferring logic is heavy hammer that block the way to
> > the compaction. It doesn't consider overall system state, so it
> > could prevent user from doing compaction falsely. In other words,
> > even if system has enough range of memory to compact, compaction would be
> > skipped due to compaction deferring logic. This patch add new tracepoint
> > to understand work of deferring logic. This will also help to check
> > compaction success and fail.
> > 
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > ---
> >  include/linux/compaction.h        |   65 +++------------------------------
> >  include/trace/events/compaction.h |   55 ++++++++++++++++++++++++++++
> >  mm/compaction.c                   |   72 +++++++++++++++++++++++++++++++++++++
> >  3 files changed, 132 insertions(+), 60 deletions(-)
> > 
> > diff --git a/include/linux/compaction.h b/include/linux/compaction.h
> > index d82181a..026ff64 100644
> > --- a/include/linux/compaction.h
> > +++ b/include/linux/compaction.h
> > @@ -44,66 +44,11 @@ extern void reset_isolation_suitable(pg_data_t *pgdat);
> >  extern unsigned long compaction_suitable(struct zone *zone, int order,
> >  					int alloc_flags, int classzone_idx);
> >  
> > -/* Do not skip compaction more than 64 times */
> > -#define COMPACT_MAX_DEFER_SHIFT 6
> > -
> > -/*
> > - * Compaction is deferred when compaction fails to result in a page
> > - * allocation success. 1 << compact_defer_limit compactions are skipped up
> > - * to a limit of 1 << COMPACT_MAX_DEFER_SHIFT
> > - */
> > -static inline void defer_compaction(struct zone *zone, int order)
> > -{
> > -	zone->compact_considered = 0;
> > -	zone->compact_defer_shift++;
> > -
> > -	if (order < zone->compact_order_failed)
> > -		zone->compact_order_failed = order;
> > -
> > -	if (zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
> > -		zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
> > -}
> > -
> > -/* Returns true if compaction should be skipped this time */
> > -static inline bool compaction_deferred(struct zone *zone, int order)
> > -{
> > -	unsigned long defer_limit = 1UL << zone->compact_defer_shift;
> > -
> > -	if (order < zone->compact_order_failed)
> > -		return false;
> > -
> > -	/* Avoid possible overflow */
> > -	if (++zone->compact_considered > defer_limit)
> > -		zone->compact_considered = defer_limit;
> > -
> > -	return zone->compact_considered < defer_limit;
> > -}
> > -
> > -/*
> > - * Update defer tracking counters after successful compaction of given order,
> > - * which means an allocation either succeeded (alloc_success == true) or is
> > - * expected to succeed.
> > - */
> > -static inline void compaction_defer_reset(struct zone *zone, int order,
> > -		bool alloc_success)
> > -{
> > -	if (alloc_success) {
> > -		zone->compact_considered = 0;
> > -		zone->compact_defer_shift = 0;
> > -	}
> > -	if (order >= zone->compact_order_failed)
> > -		zone->compact_order_failed = order + 1;
> > -}
> > -
> > -/* Returns true if restarting compaction after many failures */
> > -static inline bool compaction_restarting(struct zone *zone, int order)
> > -{
> > -	if (order < zone->compact_order_failed)
> > -		return false;
> > -
> > -	return zone->compact_defer_shift == COMPACT_MAX_DEFER_SHIFT &&
> > -		zone->compact_considered >= 1UL << zone->compact_defer_shift;
> > -}
> > +extern void defer_compaction(struct zone *zone, int order);
> > +extern bool compaction_deferred(struct zone *zone, int order);
> > +extern void compaction_defer_reset(struct zone *zone, int order,
> > +				bool alloc_success);
> > +extern bool compaction_restarting(struct zone *zone, int order);
> >  
> >  #else
> >  static inline unsigned long try_to_compact_pages(struct zonelist *zonelist,
> > diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
> > index 839dd4f..f879f41 100644
> > --- a/include/trace/events/compaction.h
> > +++ b/include/trace/events/compaction.h
> > @@ -258,6 +258,61 @@ DEFINE_EVENT(mm_compaction_suitable_template, mm_compaction_suitable,
> >  	TP_ARGS(zone, order, alloc_flags, classzone_idx, ret)
> >  );
> >  
> > +DECLARE_EVENT_CLASS(mm_compaction_defer_template,
> > +
> > +	TP_PROTO(struct zone *zone, int order),
> > +
> > +	TP_ARGS(zone, order),
> > +
> > +	TP_STRUCT__entry(
> > +		__field(int, nid)
> > +		__field(char *, name)
> > +		__field(int, order)
> > +		__field(unsigned int, considered)
> > +		__field(unsigned int, defer_shift)
> > +		__field(int, order_failed)
> > +	),
> > +
> > +	TP_fast_assign(
> > +		__entry->nid = zone_to_nid(zone);
> > +		__entry->name = (char *)zone->name;
> > +		__entry->order = order;
> > +		__entry->considered = zone->compact_considered;
> > +		__entry->defer_shift = zone->compact_defer_shift;
> > +		__entry->order_failed = zone->compact_order_failed;
> > +	),
> > +
> > +	TP_printk("node=%d zone=%-8s order=%d order_failed=%d reason=%s consider=%u limit=%lu",
> > +		__entry->nid,
> > +		__entry->name,
> > +		__entry->order,
> > +		__entry->order_failed,
> > +		__entry->order < __entry->order_failed ? "order" : "try",
> 
> This "reason" only makes sense for compaction_deferred, no? And "order" would
> never be printed there anyway, because of bug below. Also it's quite trivial to
> derive from the other data printed, so I would just remove it.

Will remove.

> 
> > +		__entry->considered,
> > +		1UL << __entry->defer_shift)
> > +);
> > +
> > +DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_deffered,
> 
>                                                             _deferred

Okay.

> > +
> > +	TP_PROTO(struct zone *zone, int order),
> > +
> > +	TP_ARGS(zone, order)
> > +);
> > +
> > +DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_defer_compaction,
> > +
> > +	TP_PROTO(struct zone *zone, int order),
> > +
> > +	TP_ARGS(zone, order)
> > +);
> > +
> > +DEFINE_EVENT(mm_compaction_defer_template, mm_compaction_defer_reset,
> > +
> > +	TP_PROTO(struct zone *zone, int order),
> > +
> > +	TP_ARGS(zone, order)
> > +);
> > +
> >  #endif /* _TRACE_COMPACTION_H */
> >  
> >  /* This part must be outside protection */
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 7500f01..7aa4249 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -123,6 +123,77 @@ static struct page *pageblock_pfn_to_page(unsigned long start_pfn,
> >  }
> >  
> >  #ifdef CONFIG_COMPACTION
> > +
> > +/* Do not skip compaction more than 64 times */
> > +#define COMPACT_MAX_DEFER_SHIFT 6
> > +
> > +/*
> > + * Compaction is deferred when compaction fails to result in a page
> > + * allocation success. 1 << compact_defer_limit compactions are skipped up
> > + * to a limit of 1 << COMPACT_MAX_DEFER_SHIFT
> > + */
> > +void defer_compaction(struct zone *zone, int order)
> > +{
> > +	zone->compact_considered = 0;
> > +	zone->compact_defer_shift++;
> > +
> > +	if (order < zone->compact_order_failed)
> > +		zone->compact_order_failed = order;
> > +
> > +	if (zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
> > +		zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
> > +
> > +	trace_mm_compaction_defer_compaction(zone, order);
> > +}
> > +
> > +/* Returns true if compaction should be skipped this time */
> > +bool compaction_deferred(struct zone *zone, int order)
> > +{
> > +	unsigned long defer_limit = 1UL << zone->compact_defer_shift;
> > +
> > +	if (order < zone->compact_order_failed)
> 
> - no tracepoint (with reason="order") in this case?
> 
> > +		return false;
> > +
> > +	/* Avoid possible overflow */
> > +	if (++zone->compact_considered > defer_limit)
> > +		zone->compact_considered = defer_limit;
> > +
> > +	if (zone->compact_considered >= defer_limit)
> 
> - no tracepoint here as well? Oh did you want to trace just when it's true? That
> makes sense, but then just remove the reason part.

Yes, my intention is to emit the trace only when it returns true.

> Hm what if we avoided dirtying the cache line in the non-deferred case? Would be
> simpler, too?
> 
> if (zone->compact_considered + 1 >= defer_limit)
>      return false;
> 
> zone->compact_considered++;
> 
> trace_mm_compaction_defer_compaction(zone, order);
> 
> return true;

Okay. I will include this minor optimization in the next version of this
patch.

Thanks.
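
For what it's worth, the two variants skip the same number of attempts for a
given compact_defer_shift; a quick userspace sketch (toy struct, illustrative
names, not the real struct zone) of both checks:

```c
#include <assert.h>

/* Toy stand-in for the zone fields involved. */
struct toy_zone {
	unsigned int compact_considered;
	unsigned int compact_defer_shift;
	int compact_order_failed;
};

/* The check as posted: increments (and clamps) the counter on every call. */
int deferred_v1(struct toy_zone *z, int order)
{
	unsigned long defer_limit = 1UL << z->compact_defer_shift;

	if (order < z->compact_order_failed)
		return 0;
	if (++z->compact_considered > defer_limit)
		z->compact_considered = defer_limit;
	return z->compact_considered < defer_limit;
}

/* The suggested variant: the "run compaction" path returns before touching
 * the counter, so it does not dirty the cache line. */
int deferred_v2(struct toy_zone *z, int order)
{
	unsigned long defer_limit = 1UL << z->compact_defer_shift;

	if (order < z->compact_order_failed)
		return 0;
	if (z->compact_considered + 1 >= defer_limit)
		return 0;
	z->compact_considered++;
	return 1;
}
```

Both skip (1 << compact_defer_shift) - 1 attempts per window. One caveat with
v2: compaction_restarting() tests compact_considered >= 1UL << compact_defer_shift,
and v2's counter stops one below the limit, so that check would need adjusting
as well.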

^ permalink raw reply	[flat|nested] 28+ messages in thread


* Re: [PATCH v2 4/5] mm/compaction: more trace to understand when/why compaction start/finish
  2015-01-13  7:16       ` Joonsoo Kim
@ 2015-01-13  8:29         ` Vlastimil Babka
  -1 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2015-01-13  8:29 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrew Morton, Mel Gorman, David Rientjes, linux-mm, linux-kernel

On 01/13/2015 08:16 AM, Joonsoo Kim wrote:
> On Mon, Jan 12, 2015 at 04:53:53PM +0100, Vlastimil Babka wrote:
>> On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
>> > It is not well understood when/why compaction starts or finishes. With
>> > these new tracepoints, we can learn much more about the reasons that
>> > compaction starts and finishes. I found the following bug with these
>> > tracepoints.
>> > 
>> > http://www.spinics.net/lists/linux-mm/msg81582.html
>> > 
>> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> > ---
>> >  include/linux/compaction.h        |    3 ++
>> >  include/trace/events/compaction.h |   94 +++++++++++++++++++++++++++++++++++++
>> >  mm/compaction.c                   |   41 ++++++++++++++--
>> >  3 files changed, 134 insertions(+), 4 deletions(-)
>> > 
>> > diff --git a/include/linux/compaction.h b/include/linux/compaction.h
>> > index a9547b6..d82181a 100644
>> > --- a/include/linux/compaction.h
>> > +++ b/include/linux/compaction.h
>> > @@ -12,6 +12,9 @@
>> >  #define COMPACT_PARTIAL		3
>> >  /* The full zone was compacted */
>> >  #define COMPACT_COMPLETE	4
>> > +/* For more detailed tracepoint output */
>> > +#define COMPACT_NO_SUITABLE_PAGE	5
>> > +#define COMPACT_NOT_SUITABLE_ZONE	6
>> >  /* When adding new state, please change compaction_status_string, too */
>> >  
>> >  /* Used to signal whether compaction detected need_sched() or lock contention */
>> > diff --git a/include/trace/events/compaction.h b/include/trace/events/compaction.h
>> > index 139020b..839dd4f 100644
>> > --- a/include/trace/events/compaction.h
>> > +++ b/include/trace/events/compaction.h
>> > @@ -164,6 +164,100 @@ TRACE_EVENT(mm_compaction_end,
>> >  		compaction_status_string[__entry->status])
>> >  );
>> >  
>> > +TRACE_EVENT(mm_compaction_try_to_compact_pages,
>> > +
>> > +	TP_PROTO(
>> > +		int order,
>> > +		gfp_t gfp_mask,
>> > +		enum migrate_mode mode,
>> > +		int alloc_flags,
>> > +		int classzone_idx),
>> 
>> I wonder if alloc_flags and classzone_idx is particularly useful. It affects the
>> watermark checks, but those are a bit of blackbox anyway.
> 
> Yes, I think so. How about printing the gfp_mask rather than these? It would
> tell us the migratetype and other information, so it would be useful.

Yeah gfp_mask should be enough.

> 
> Thanks.
> 




* Re: [PATCH v2 5/5] mm/compaction: add tracepoint to observe behaviour of compaction defer
  2015-01-13  7:18       ` Joonsoo Kim
@ 2015-01-13  8:35         ` Vlastimil Babka
  -1 siblings, 0 replies; 28+ messages in thread
From: Vlastimil Babka @ 2015-01-13  8:35 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrew Morton, Mel Gorman, David Rientjes, linux-mm, linux-kernel

On 01/13/2015 08:18 AM, Joonsoo Kim wrote:
> On Mon, Jan 12, 2015 at 05:35:47PM +0100, Vlastimil Babka wrote:
>> Hm what if we avoided dirtying the cache line in the non-deferred case? Would be
>> simpler, too?
>> 
>> if (zone->compact_considered + 1 >= defer_limit)
>>      return false;
>> 
>> zone->compact_considered++;
>> 
>> trace_mm_compaction_defer_compaction(zone, order);
>> 
>> return true;
> 
> Okay. I will include this minor optimization in next version of this
> patch.

Hm, on second thought, the "+ 1" part would break compaction_restarting(),
and it's ugly anyway. Removing the "+ 1" would increase the number of
compaction_deferred() attempts until success by one. That should be
negligible, but it's maybe not good to hide it in a tracepoint patch. Sorry
for the noise.

> Thanks.
> 




end of thread, other threads:[~2015-01-13  8:35 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-12  8:21 [PATCH v2 1/5] mm/compaction: change tracepoint format from decimal to hexadecimal Joonsoo Kim
2015-01-12  8:21 ` [PATCH v2 2/5] mm/compaction: enhance tracepoint output for compaction begin/end Joonsoo Kim
2015-01-12 14:32   ` Vlastimil Babka
2015-01-12  8:21 ` [PATCH v2 3/5] mm/compaction: print current range where compaction work Joonsoo Kim
2015-01-12 14:34   ` Vlastimil Babka
2015-01-12  8:21 ` [PATCH v2 4/5] mm/compaction: more trace to understand when/why compaction start/finish Joonsoo Kim
2015-01-12 15:53   ` Vlastimil Babka
2015-01-13  7:16     ` Joonsoo Kim
2015-01-13  8:29       ` Vlastimil Babka
2015-01-12  8:21 ` [PATCH v2 5/5] mm/compaction: add tracepoint to observe behaviour of compaction defer Joonsoo Kim
2015-01-12 16:35   ` Vlastimil Babka
2015-01-13  7:18     ` Joonsoo Kim
2015-01-13  8:35       ` Vlastimil Babka
2015-01-12 14:23 ` [PATCH v2 1/5] mm/compaction: change tracepoint format from decimal to hexadecimal Vlastimil Babka
