linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface
@ 2016-07-01  6:40 Ganesh Mahendran
  2016-07-01  6:41 ` [PATCH 2/8] mm/zsmalloc: add per class compact trace event Ganesh Mahendran
                   ` (6 more replies)
  0 siblings, 7 replies; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-01  6:40 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

1. Rename trace_zsmalloc_compact_* to trace_zs_compact_*, to stay
consistent with other definitions in the zsmalloc module.

2. Remove the pages_total_compacted information from trace_zs_compact_end(),
since the cumulative total is not very useful for a single zs_compact call.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
 include/trace/events/zsmalloc.h | 16 ++++++----------
 mm/zsmalloc.c                   |  7 +++----
 2 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/include/trace/events/zsmalloc.h b/include/trace/events/zsmalloc.h
index 3b6f14e..c7a39f4 100644
--- a/include/trace/events/zsmalloc.h
+++ b/include/trace/events/zsmalloc.h
@@ -7,7 +7,7 @@
 #include <linux/types.h>
 #include <linux/tracepoint.h>
 
-TRACE_EVENT(zsmalloc_compact_start,
+TRACE_EVENT(zs_compact_start,
 
 	TP_PROTO(const char *pool_name),
 
@@ -25,29 +25,25 @@ TRACE_EVENT(zsmalloc_compact_start,
 		  __entry->pool_name)
 );
 
-TRACE_EVENT(zsmalloc_compact_end,
+TRACE_EVENT(zs_compact_end,
 
-	TP_PROTO(const char *pool_name, unsigned long pages_compacted,
-			unsigned long pages_total_compacted),
+	TP_PROTO(const char *pool_name, unsigned long pages_compacted),
 
-	TP_ARGS(pool_name, pages_compacted, pages_total_compacted),
+	TP_ARGS(pool_name, pages_compacted),
 
 	TP_STRUCT__entry(
 		__field(const char *, pool_name)
 		__field(unsigned long, pages_compacted)
-		__field(unsigned long, pages_total_compacted)
 	),
 
 	TP_fast_assign(
 		__entry->pool_name = pool_name;
 		__entry->pages_compacted = pages_compacted;
-		__entry->pages_total_compacted = pages_total_compacted;
 	),
 
-	TP_printk("pool %s: %ld pages compacted(total %ld)",
+	TP_printk("pool %s: %ld pages compacted",
 		  __entry->pool_name,
-		  __entry->pages_compacted,
-		  __entry->pages_total_compacted)
+		  __entry->pages_compacted)
 );
 
 #endif /* _TRACE_ZSMALLOC_H */
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e425de4..c7f79d5 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2323,7 +2323,7 @@ unsigned long zs_compact(struct zs_pool *pool)
 	struct size_class *class;
 	unsigned long pages_compacted_before = pool->stats.pages_compacted;
 
-	trace_zsmalloc_compact_start(pool->name);
+	trace_zs_compact_start(pool->name);
 
 	for (i = zs_size_classes - 1; i >= 0; i--) {
 		class = pool->size_class[i];
@@ -2334,9 +2334,8 @@ unsigned long zs_compact(struct zs_pool *pool)
 		__zs_compact(pool, class);
 	}
 
-	trace_zsmalloc_compact_end(pool->name,
-		pool->stats.pages_compacted - pages_compacted_before,
-		pool->stats.pages_compacted);
+	trace_zs_compact_end(pool->name,
+		pool->stats.pages_compacted - pages_compacted_before);
 
 	return pool->stats.pages_compacted;
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 2/8] mm/zsmalloc: add per class compact trace event
  2016-07-01  6:40 [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface Ganesh Mahendran
@ 2016-07-01  6:41 ` Ganesh Mahendran
  2016-07-03 23:49   ` Minchan Kim
  2016-07-01  6:41 ` [PATCH 3/8] mm/zsmalloc: take obj index back from find_alloced_obj Ganesh Mahendran
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-01  6:41 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Add a per-class compact trace event. It shows how many zspages
were isolated and how many were reclaimed.

----
  <...>-627   [002] ....   192.641122: zs_compact_start: pool zram0
  <...>-627   [002] ....   192.641166: zs_compact_class: class 254: 0 zspage isolated, 0 reclaimed
  <...>-627   [002] ....   192.641169: zs_compact_class: class 202: 0 zspage isolated, 0 reclaimed
  <...>-627   [002] ....   192.641172: zs_compact_class: class 190: 0 zspage isolated, 0 reclaimed
  <...>-627   [002] ....   192.641180: zs_compact_class: class 168: 3 zspage isolated, 1 reclaimed
  <...>-627   [002] ....   192.641190: zs_compact_class: class 151: 3 zspage isolated, 1 reclaimed
  <...>-627   [002] ....   192.641201: zs_compact_class: class 144: 6 zspage isolated, 1 reclaimed
  <...>-627   [002] ....   192.641224: zs_compact_class: class 126: 24 zspage isolated, 12 reclaimed
  <...>-627   [002] ....   192.641261: zs_compact_class: class 111: 10 zspage isolated, 2 reclaimed
kswapd0-627   [002] ....   192.641333: zs_compact_class: class 107: 38 zspage isolated, 8 reclaimed
kswapd0-627   [002] ....   192.641415: zs_compact_class: class 100: 45 zspage isolated, 12 reclaimed
kswapd0-627   [002] ....   192.641481: zs_compact_class: class  94: 24 zspage isolated, 5 reclaimed
kswapd0-627   [002] ....   192.641568: zs_compact_class: class  91: 69 zspage isolated, 14 reclaimed
kswapd0-627   [002] ....   192.641688: zs_compact_class: class  83: 120 zspage isolated, 47 reclaimed
kswapd0-627   [002] ....   192.641765: zs_compact_class: class  76: 34 zspage isolated, 5 reclaimed
kswapd0-627   [002] ....   192.641832: zs_compact_class: class  74: 34 zspage isolated, 6 reclaimed
kswapd0-627   [002] ....   192.641958: zs_compact_class: class  71: 66 zspage isolated, 17 reclaimed
kswapd0-627   [002] ....   192.642000: zs_compact_class: class  67: 17 zspage isolated, 3 reclaimed
kswapd0-627   [002] ....   192.642063: zs_compact_class: class  66: 29 zspage isolated, 5 reclaimed
kswapd0-627   [002] ....   192.642113: zs_compact_class: class  62: 38 zspage isolated, 12 reclaimed
kswapd0-627   [002] ....   192.642143: zs_compact_class: class  58: 8 zspage isolated, 1 reclaimed
kswapd0-627   [002] ....   192.642176: zs_compact_class: class  57: 25 zspage isolated, 5 reclaimed
kswapd0-627   [002] ....   192.642184: zs_compact_class: class  54: 11 zspage isolated, 2 reclaimed
kswapd0-627   [002] ....   192.642191: zs_compact_class: class  52: 5 zspage isolated, 1 reclaimed
kswapd0-627   [002] ....   192.642201: zs_compact_class: class  51: 6 zspage isolated, 1 reclaimed
kswapd0-627   [002] ....   192.642211: zs_compact_class: class  49: 11 zspage isolated, 3 reclaimed
kswapd0-627   [002] ....   192.642216: zs_compact_class: class  46: 2 zspage isolated, 1 reclaimed
kswapd0-627   [002] ....   192.642218: zs_compact_class: class  44: 0 zspage isolated, 0 reclaimed
kswapd0-627   [002] ....   192.642221: zs_compact_class: class  43: 0 zspage isolated, 0 reclaimed
  ...
----

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
 include/trace/events/zsmalloc.h | 24 ++++++++++++++++++++++++
 mm/zsmalloc.c                   | 16 +++++++++++++++-
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/include/trace/events/zsmalloc.h b/include/trace/events/zsmalloc.h
index c7a39f4..e745246 100644
--- a/include/trace/events/zsmalloc.h
+++ b/include/trace/events/zsmalloc.h
@@ -46,6 +46,30 @@ TRACE_EVENT(zs_compact_end,
 		  __entry->pages_compacted)
 );
 
+TRACE_EVENT(zs_compact_class,
+
+	TP_PROTO(int class, unsigned long zspage_isolated, unsigned long zspage_reclaimed),
+
+	TP_ARGS(class, zspage_isolated, zspage_reclaimed),
+
+	TP_STRUCT__entry(
+		__field(int, class)
+		__field(unsigned long, zspage_isolated)
+		__field(unsigned long, zspage_reclaimed)
+	),
+
+	TP_fast_assign(
+		__entry->class = class;
+		__entry->zspage_isolated = zspage_isolated;
+		__entry->zspage_reclaimed = zspage_reclaimed;
+	),
+
+	TP_printk("class %3d: %ld zspage isolated, %ld zspage reclaimed",
+		  __entry->class,
+		  __entry->zspage_isolated,
+		  __entry->zspage_reclaimed)
+);
+
 #endif /* _TRACE_ZSMALLOC_H */
 
 /* This part must be outside protection */
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c7f79d5..405baa5 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1780,6 +1780,11 @@ struct zs_compact_control {
 	 /* Starting object index within @s_page which used for live object
 	  * in the subpage. */
 	int index;
+
+	/* zspage isolated */
+	unsigned long nr_isolated;
+	/* zspage reclaimed */
+	unsigned long nr_reclaimed;
 };
 
 static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
@@ -2272,7 +2277,10 @@ static unsigned long zs_can_compact(struct size_class *class)
 
 static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 {
-	struct zs_compact_control cc;
+	struct zs_compact_control cc = {
+		.nr_isolated = 0,
+		.nr_reclaimed = 0,
+	};
 	struct zspage *src_zspage;
 	struct zspage *dst_zspage = NULL;
 
@@ -2282,10 +2290,13 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		if (!zs_can_compact(class))
 			break;
 
+		cc.nr_isolated++;
+
 		cc.index = 0;
 		cc.s_page = get_first_page(src_zspage);
 
 		while ((dst_zspage = isolate_zspage(class, false))) {
+			cc.nr_isolated++;
 			cc.d_page = get_first_page(dst_zspage);
 			/*
 			 * If there is no more space in dst_page, resched
@@ -2304,6 +2315,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(class, dst_zspage);
 		if (putback_zspage(class, src_zspage) == ZS_EMPTY) {
 			free_zspage(pool, class, src_zspage);
+			cc.nr_reclaimed++;
 			pool->stats.pages_compacted += class->pages_per_zspage;
 		}
 		spin_unlock(&class->lock);
@@ -2315,6 +2327,8 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(class, src_zspage);
 
 	spin_unlock(&class->lock);
+
+	trace_zs_compact_class(class->index, cc.nr_isolated, cc.nr_reclaimed);
 }
 
 unsigned long zs_compact(struct zs_pool *pool)
-- 
1.9.1


* [PATCH 3/8] mm/zsmalloc: take obj index back from find_alloced_obj
  2016-07-01  6:40 [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface Ganesh Mahendran
  2016-07-01  6:41 ` [PATCH 2/8] mm/zsmalloc: add per class compact trace event Ganesh Mahendran
@ 2016-07-01  6:41 ` Ganesh Mahendran
  2016-07-03 23:57   ` Minchan Kim
  2016-07-01  6:41 ` [PATCH 4/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects Ganesh Mahendran
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-01  6:41 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

The obj index value should be updated after returning from
find_alloced_obj().

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
 mm/zsmalloc.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 405baa5..5c96ed1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1744,15 +1744,16 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
  * return handle.
  */
 static unsigned long find_alloced_obj(struct size_class *class,
-					struct page *page, int index)
+					struct page *page, int *index)
 {
 	unsigned long head;
 	int offset = 0;
+	int objidx = *index;
 	unsigned long handle = 0;
 	void *addr = kmap_atomic(page);
 
 	offset = get_first_obj_offset(page);
-	offset += class->size * index;
+	offset += class->size * objidx;
 
 	while (offset < PAGE_SIZE) {
 		head = obj_to_head(page, addr + offset);
@@ -1764,9 +1765,11 @@ static unsigned long find_alloced_obj(struct size_class *class,
 		}
 
 		offset += class->size;
-		index++;
+		objidx++;
 	}
 
+	*index = objidx;
+
 	kunmap_atomic(addr);
 	return handle;
 }
@@ -1794,11 +1797,11 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 	unsigned long handle;
 	struct page *s_page = cc->s_page;
 	struct page *d_page = cc->d_page;
-	unsigned long index = cc->index;
+	unsigned int index = cc->index;
 	int ret = 0;
 
 	while (1) {
-		handle = find_alloced_obj(class, s_page, index);
+		handle = find_alloced_obj(class, s_page, &index);
 		if (!handle) {
 			s_page = get_next_page(s_page);
 			if (!s_page)
-- 
1.9.1


* [PATCH 4/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects
  2016-07-01  6:40 [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface Ganesh Mahendran
  2016-07-01  6:41 ` [PATCH 2/8] mm/zsmalloc: add per class compact trace event Ganesh Mahendran
  2016-07-01  6:41 ` [PATCH 3/8] mm/zsmalloc: take obj index back from find_alloced_obj Ganesh Mahendran
@ 2016-07-01  6:41 ` Ganesh Mahendran
  2016-07-03 23:58   ` Minchan Kim
  2016-07-01  6:41 ` [PATCH 5/8] mm/zsmalloc: avoid calculate max objects of zspage twice Ganesh Mahendran
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-01  6:41 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

The maximum number of objects in a zspage is now stored in each
size_class, so there is no need to recalculate it.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
 mm/zsmalloc.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5c96ed1..50283b1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -638,8 +638,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		freeable = zs_can_compact(class);
 		spin_unlock(&class->lock);
 
-		objs_per_zspage = get_maxobj_per_zspage(class->size,
-				class->pages_per_zspage);
+		objs_per_zspage = class->objs_per_zspage;
 		pages_used = obj_allocated / objs_per_zspage *
 				class->pages_per_zspage;
 
@@ -1017,8 +1016,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	cache_free_zspage(pool, zspage);
 
-	zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-			class->size, class->pages_per_zspage));
+	zs_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
 	atomic_long_sub(class->pages_per_zspage,
 					&pool->pages_allocated);
 }
@@ -1369,7 +1367,7 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
 	if (prev->pages_per_zspage != pages_per_zspage)
 		return false;
 
-	if (get_maxobj_per_zspage(prev->size, prev->pages_per_zspage)
+	if (prev->objs_per_zspage
 		!= get_maxobj_per_zspage(size, pages_per_zspage))
 		return false;
 
@@ -1595,8 +1593,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	record_obj(handle, obj);
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
-	zs_stat_inc(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-			class->size, class->pages_per_zspage));
+	zs_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -2272,8 +2269,7 @@ static unsigned long zs_can_compact(struct size_class *class)
 		return 0;
 
 	obj_wasted = obj_allocated - obj_used;
-	obj_wasted /= get_maxobj_per_zspage(class->size,
-			class->pages_per_zspage);
+	obj_wasted /= class->objs_per_zspage;
 
 	return obj_wasted * class->pages_per_zspage;
 }
@@ -2495,8 +2491,8 @@ struct zs_pool *zs_create_pool(const char *name)
 		class->size = size;
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
-		class->objs_per_zspage = class->pages_per_zspage *
-						PAGE_SIZE / class->size;
+		class->objs_per_zspage = get_maxobj_per_zspage(class->size,
+							class->pages_per_zspage);
 		spin_lock_init(&class->lock);
 		pool->size_class[i] = class;
 		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
-- 
1.9.1


* [PATCH 5/8] mm/zsmalloc: avoid calculate max objects of zspage twice
  2016-07-01  6:40 [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface Ganesh Mahendran
                   ` (2 preceding siblings ...)
  2016-07-01  6:41 ` [PATCH 4/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects Ganesh Mahendran
@ 2016-07-01  6:41 ` Ganesh Mahendran
  2016-07-04  0:03   ` Minchan Kim
  2016-07-01  6:41 ` [PATCH 6/8] mm/zsmalloc: keep comments consistent with code Ganesh Mahendran
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-01  6:41 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Currently, if a class cannot be merged, the maximum number of objects in
a zspage for that class may be calculated twice.

This patch calculates the maximum number of objects in a zspage at the
beginning and passes the value to can_merge() to decide whether the
class can be merged.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
 mm/zsmalloc.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 50283b1..2690914 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1362,16 +1362,14 @@ static void init_zs_size_classes(void)
 	zs_size_classes = nr;
 }
 
-static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
+static bool can_merge(struct size_class *prev, int pages_per_zspage,
+					int objs_per_zspage)
 {
-	if (prev->pages_per_zspage != pages_per_zspage)
-		return false;
-
-	if (prev->objs_per_zspage
-		!= get_maxobj_per_zspage(size, pages_per_zspage))
-		return false;
+	if (prev->pages_per_zspage == pages_per_zspage &&
+		prev->objs_per_zspage == objs_per_zspage)
+		return true;
 
-	return true;
+	return false;
 }
 
 static bool zspage_full(struct size_class *class, struct zspage *zspage)
@@ -2460,6 +2458,7 @@ struct zs_pool *zs_create_pool(const char *name)
 	for (i = zs_size_classes - 1; i >= 0; i--) {
 		int size;
 		int pages_per_zspage;
+		int objs_per_zspage;
 		struct size_class *class;
 		int fullness = 0;
 
@@ -2467,6 +2466,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		if (size > ZS_MAX_ALLOC_SIZE)
 			size = ZS_MAX_ALLOC_SIZE;
 		pages_per_zspage = get_pages_per_zspage(size);
+		objs_per_zspage = get_maxobj_per_zspage(size, pages_per_zspage);
 
 		/*
 		 * size_class is used for normal zsmalloc operation such
@@ -2478,7 +2478,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		 * previous size_class if possible.
 		 */
 		if (prev_class) {
-			if (can_merge(prev_class, size, pages_per_zspage)) {
+			if (can_merge(prev_class, pages_per_zspage, objs_per_zspage)) {
 				pool->size_class[i] = prev_class;
 				continue;
 			}
@@ -2491,8 +2491,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		class->size = size;
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
-		class->objs_per_zspage = get_maxobj_per_zspage(class->size,
-							class->pages_per_zspage);
+		class->objs_per_zspage = objs_per_zspage;
 		spin_lock_init(&class->lock);
 		pool->size_class[i] = class;
 		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
-- 
1.9.1


* [PATCH 6/8] mm/zsmalloc: keep comments consistent with code
  2016-07-01  6:40 [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface Ganesh Mahendran
                   ` (3 preceding siblings ...)
  2016-07-01  6:41 ` [PATCH 5/8] mm/zsmalloc: avoid calculate max objects of zspage twice Ganesh Mahendran
@ 2016-07-01  6:41 ` Ganesh Mahendran
  2016-07-04  0:05   ` Minchan Kim
  2016-07-01  6:41 ` [PATCH 7/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
  2016-07-01  6:41 ` [PATCH 8/8] mm/zsmalloc: use helper to clear page->flags bit Ganesh Mahendran
  6 siblings, 1 reply; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-01  6:41 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Some minor comment changes:
1) Update the zs_malloc() and zs_create_pool() function headers.
2) Update "Usage of struct page fields".

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
 mm/zsmalloc.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2690914..6fc631a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -20,6 +20,7 @@
  *	page->freelist(index): links together all component pages of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
+ *	page->units: first object index in a subpage of zspage
  *
  * Usage of struct page flags:
  *	PG_private: identifies the first component page
@@ -140,9 +141,6 @@
  */
 #define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> CLASS_BITS)
 
-/*
- * We do not maintain any list for completely empty or full pages
- */
 enum fullness_group {
 	ZS_EMPTY,
 	ZS_ALMOST_EMPTY,
@@ -1540,6 +1538,7 @@ static unsigned long obj_malloc(struct size_class *class,
  * zs_malloc - Allocate block of given size from pool.
  * @pool: pool to allocate from
  * @size: size of block to allocate
+ * @gfp: gfp flags when allocating object
  *
  * On success, handle to the allocated object is returned,
  * otherwise 0.
@@ -2418,7 +2417,7 @@ static int zs_register_shrinker(struct zs_pool *pool)
 
 /**
  * zs_create_pool - Creates an allocation pool to work from.
- * @flags: allocation flags used to allocate pool metadata
+ * @name: pool name to be created
  *
  * This function must be called before anything when using
  * the zsmalloc allocator.
-- 
1.9.1


* [PATCH 7/8] mm/zsmalloc: add __init,__exit attribute
  2016-07-01  6:40 [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface Ganesh Mahendran
                   ` (4 preceding siblings ...)
  2016-07-01  6:41 ` [PATCH 6/8] mm/zsmalloc: keep comments consistent with code Ganesh Mahendran
@ 2016-07-01  6:41 ` Ganesh Mahendran
  2016-07-04  0:09   ` Minchan Kim
  2016-07-01  6:41 ` [PATCH 8/8] mm/zsmalloc: use helper to clear page->flags bit Ganesh Mahendran
  6 siblings, 1 reply; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-01  6:41 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Add the __init/__exit attribute to functions that are only called
during module init/exit.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
 mm/zsmalloc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 6fc631a..1c7460b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1349,7 +1349,7 @@ static void zs_unregister_cpu_notifier(void)
 	cpu_notifier_register_done();
 }
 
-static void init_zs_size_classes(void)
+static void __init init_zs_size_classes(void)
 {
 	int nr;
 
@@ -1896,7 +1896,7 @@ static struct file_system_type zsmalloc_fs = {
 	.kill_sb	= kill_anon_super,
 };
 
-static int zsmalloc_mount(void)
+static int __init zsmalloc_mount(void)
 {
 	int ret = 0;
 
@@ -1907,7 +1907,7 @@ static int zsmalloc_mount(void)
 	return ret;
 }
 
-static void zsmalloc_unmount(void)
+static void __exit zsmalloc_unmount(void)
 {
 	kern_unmount(zsmalloc_mnt);
 }
-- 
1.9.1


* [PATCH 8/8] mm/zsmalloc: use helper to clear page->flags bit
  2016-07-01  6:40 [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface Ganesh Mahendran
                   ` (5 preceding siblings ...)
  2016-07-01  6:41 ` [PATCH 7/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
@ 2016-07-01  6:41 ` Ganesh Mahendran
  2016-07-04  0:11   ` Minchan Kim
  6 siblings, 1 reply; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-01  6:41 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Use the ClearPagePrivate/ClearPagePrivate2 helpers to clear
PG_private/PG_private_2 in page->flags.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
 mm/zsmalloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1c7460b..356db9a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -945,8 +945,8 @@ static void unpin_tag(unsigned long handle)
 static void reset_page(struct page *page)
 {
 	__ClearPageMovable(page);
-	clear_bit(PG_private, &page->flags);
-	clear_bit(PG_private_2, &page->flags);
+	ClearPagePrivate(page);
+	ClearPagePrivate2(page);
 	set_page_private(page, 0);
 	page_mapcount_reset(page);
 	ClearPageHugeObject(page);
-- 
1.9.1


* Re: [PATCH 2/8] mm/zsmalloc: add per class compact trace event
  2016-07-01  6:41 ` [PATCH 2/8] mm/zsmalloc: add per class compact trace event Ganesh Mahendran
@ 2016-07-03 23:49   ` Minchan Kim
  2016-07-04  3:12     ` Ganesh Mahendran
  0 siblings, 1 reply; 19+ messages in thread
From: Minchan Kim @ 2016-07-03 23:49 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Fri, Jul 01, 2016 at 02:41:00PM +0800, Ganesh Mahendran wrote:
> Add a per-class compact trace event. It shows how many zspages
> were isolated and how many were reclaimed.

I don't know what you are after with this event trace.

What's the relation between the number of zspages isolated and zspages
reclaimed? IOW, with that, what do you want to know?

> 
> ----
>   <...>-627   [002] ....   192.641122: zs_compact_start: pool zram0
>   <...>-627   [002] ....   192.641166: zs_compact_class: class 254: 0 zspage isolated, 0 reclaimed
>   <...>-627   [002] ....   192.641169: zs_compact_class: class 202: 0 zspage isolated, 0 reclaimed
>   <...>-627   [002] ....   192.641172: zs_compact_class: class 190: 0 zspage isolated, 0 reclaimed
>   <...>-627   [002] ....   192.641180: zs_compact_class: class 168: 3 zspage isolated, 1 reclaimed
>   <...>-627   [002] ....   192.641190: zs_compact_class: class 151: 3 zspage isolated, 1 reclaimed
>   <...>-627   [002] ....   192.641201: zs_compact_class: class 144: 6 zspage isolated, 1 reclaimed
>   <...>-627   [002] ....   192.641224: zs_compact_class: class 126: 24 zspage isolated, 12 reclaimed
>   <...>-627   [002] ....   192.641261: zs_compact_class: class 111: 10 zspage isolated, 2 reclaimed
> kswapd0-627   [002] ....   192.641333: zs_compact_class: class 107: 38 zspage isolated, 8 reclaimed
> kswapd0-627   [002] ....   192.641415: zs_compact_class: class 100: 45 zspage isolated, 12 reclaimed
> kswapd0-627   [002] ....   192.641481: zs_compact_class: class  94: 24 zspage isolated, 5 reclaimed
> kswapd0-627   [002] ....   192.641568: zs_compact_class: class  91: 69 zspage isolated, 14 reclaimed
> kswapd0-627   [002] ....   192.641688: zs_compact_class: class  83: 120 zspage isolated, 47 reclaimed
> kswapd0-627   [002] ....   192.641765: zs_compact_class: class  76: 34 zspage isolated, 5 reclaimed
> kswapd0-627   [002] ....   192.641832: zs_compact_class: class  74: 34 zspage isolated, 6 reclaimed
> kswapd0-627   [002] ....   192.641958: zs_compact_class: class  71: 66 zspage isolated, 17 reclaimed
> kswapd0-627   [002] ....   192.642000: zs_compact_class: class  67: 17 zspage isolated, 3 reclaimed
> kswapd0-627   [002] ....   192.642063: zs_compact_class: class  66: 29 zspage isolated, 5 reclaimed
> kswapd0-627   [002] ....   192.642113: zs_compact_class: class  62: 38 zspage isolated, 12 reclaimed
> kswapd0-627   [002] ....   192.642143: zs_compact_class: class  58: 8 zspage isolated, 1 reclaimed
> kswapd0-627   [002] ....   192.642176: zs_compact_class: class  57: 25 zspage isolated, 5 reclaimed
> kswapd0-627   [002] ....   192.642184: zs_compact_class: class  54: 11 zspage isolated, 2 reclaimed
> kswapd0-627   [002] ....   192.642191: zs_compact_class: class  52: 5 zspage isolated, 1 reclaimed
> kswapd0-627   [002] ....   192.642201: zs_compact_class: class  51: 6 zspage isolated, 1 reclaimed
> kswapd0-627   [002] ....   192.642211: zs_compact_class: class  49: 11 zspage isolated, 3 reclaimed
> kswapd0-627   [002] ....   192.642216: zs_compact_class: class  46: 2 zspage isolated, 1 reclaimed
> kswapd0-627   [002] ....   192.642218: zs_compact_class: class  44: 0 zspage isolated, 0 reclaimed
> kswapd0-627   [002] ....   192.642221: zs_compact_class: class  43: 0 zspage isolated, 0 reclaimed
>   ...
> ----
> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> ---
>  include/trace/events/zsmalloc.h | 24 ++++++++++++++++++++++++
>  mm/zsmalloc.c                   | 16 +++++++++++++++-
>  2 files changed, 39 insertions(+), 1 deletion(-)
> 
> diff --git a/include/trace/events/zsmalloc.h b/include/trace/events/zsmalloc.h
> index c7a39f4..e745246 100644
> --- a/include/trace/events/zsmalloc.h
> +++ b/include/trace/events/zsmalloc.h
> @@ -46,6 +46,30 @@ TRACE_EVENT(zs_compact_end,
>  		  __entry->pages_compacted)
>  );
>  
> +TRACE_EVENT(zs_compact_class,
> +
> +	TP_PROTO(int class, unsigned long zspage_isolated, unsigned long zspage_reclaimed),
> +
> +	TP_ARGS(class, zspage_isolated, zspage_reclaimed),
> +
> +	TP_STRUCT__entry(
> +		__field(int, class)
> +		__field(unsigned long, zspage_isolated)
> +		__field(unsigned long, zspage_reclaimed)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->class = class;
> +		__entry->zspage_isolated = zspage_isolated;
> +		__entry->zspage_reclaimed = zspage_reclaimed;
> +	),
> +
> +	TP_printk("class %3d: %ld zspage isolated, %ld zspage reclaimed",
> +		  __entry->class,
> +		  __entry->zspage_isolated,
> +		  __entry->zspage_reclaimed)
> +);
> +
>  #endif /* _TRACE_ZSMALLOC_H */
>  
>  /* This part must be outside protection */
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index c7f79d5..405baa5 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1780,6 +1780,11 @@ struct zs_compact_control {
>  	 /* Starting object index within @s_page which used for live object
>  	  * in the subpage. */
>  	int index;
> +
> +	/* zspage isolated */
> +	unsigned long nr_isolated;
> +	/* zspage reclaimed */
> +	unsigned long nr_reclaimed;
>  };
>  
>  static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> @@ -2272,7 +2277,10 @@ static unsigned long zs_can_compact(struct size_class *class)
>  
>  static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  {
> -	struct zs_compact_control cc;
> +	struct zs_compact_control cc = {
> +		.nr_isolated = 0,
> +		.nr_reclaimed = 0,
> +	};
>  	struct zspage *src_zspage;
>  	struct zspage *dst_zspage = NULL;
>  
> @@ -2282,10 +2290,13 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  		if (!zs_can_compact(class))
>  			break;
>  
> +		cc.nr_isolated++;
> +
>  		cc.index = 0;
>  		cc.s_page = get_first_page(src_zspage);
>  
>  		while ((dst_zspage = isolate_zspage(class, false))) {
> +			cc.nr_isolated++;
>  			cc.d_page = get_first_page(dst_zspage);
>  			/*
>  			 * If there is no more space in dst_page, resched
> @@ -2304,6 +2315,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  		putback_zspage(class, dst_zspage);
>  		if (putback_zspage(class, src_zspage) == ZS_EMPTY) {
>  			free_zspage(pool, class, src_zspage);
> +			cc.nr_reclaimed++;
>  			pool->stats.pages_compacted += class->pages_per_zspage;
>  		}
>  		spin_unlock(&class->lock);
> @@ -2315,6 +2327,8 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  		putback_zspage(class, src_zspage);
>  
>  	spin_unlock(&class->lock);
> +
> +	trace_zs_compact_class(class->index, cc.nr_isolated, cc.nr_reclaimed);
>  }
>  
>  unsigned long zs_compact(struct zs_pool *pool)
> -- 
> 1.9.1
> 


* Re: [PATCH 3/8] mm/zsmalloc: take obj index back from find_alloced_obj
  2016-07-01  6:41 ` [PATCH 3/8] mm/zsmalloc: take obj index back from find_alloced_obj Ganesh Mahendran
@ 2016-07-03 23:57   ` Minchan Kim
  2016-07-04  3:23     ` Ganesh Mahendran
  0 siblings, 1 reply; 19+ messages in thread
From: Minchan Kim @ 2016-07-03 23:57 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Fri, Jul 01, 2016 at 02:41:01PM +0800, Ganesh Mahendran wrote:
> The obj index value should be updated after returning from
> find_alloced_obj()
 
        to avoid CPU burning caused by unnecessary object scanning.

The description should state the goal of the change.

> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> ---
>  mm/zsmalloc.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 405baa5..5c96ed1 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1744,15 +1744,16 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
>   * return handle.
>   */
>  static unsigned long find_alloced_obj(struct size_class *class,
> -					struct page *page, int index)
> +					struct page *page, int *index)
>  {
>  	unsigned long head;
>  	int offset = 0;
> +	int objidx = *index;

Nit:

We already use obj_idx elsewhere, so I prefer it for consistency.

Suggestion:
Would you mind renaming index in zs_compact_control and
migrate_zspage to obj_idx while you are at it?

Strictly speaking, such a clean-up belongs in a separate patch, but I
don't mind mixing it in here (of course, sending it as another clean-up
patch would be better). If you would rather not, just leave it as is.
I will do it sometime.

>  	unsigned long handle = 0;
>  	void *addr = kmap_atomic(page);
>  
>  	offset = get_first_obj_offset(page);
> -	offset += class->size * index;
> +	offset += class->size * objidx;
>  
>  	while (offset < PAGE_SIZE) {
>  		head = obj_to_head(page, addr + offset);
> @@ -1764,9 +1765,11 @@ static unsigned long find_alloced_obj(struct size_class *class,
>  		}
>  
>  		offset += class->size;
> -		index++;
> +		objidx++;
>  	}
>  
> +	*index = objidx;

We can do this outside the kmap section, right before returning the handle.

Thanks!

> +
>  	kunmap_atomic(addr);
>  	return handle;
>  }
> @@ -1794,11 +1797,11 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>  	unsigned long handle;
>  	struct page *s_page = cc->s_page;
>  	struct page *d_page = cc->d_page;
> -	unsigned long index = cc->index;
> +	unsigned int index = cc->index;
>  	int ret = 0;
>  
>  	while (1) {
> -		handle = find_alloced_obj(class, s_page, index);
> +		handle = find_alloced_obj(class, s_page, &index);
>  		if (!handle) {
>  			s_page = get_next_page(s_page);
>  			if (!s_page)
> -- 
> 1.9.1
> 
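The pattern under review here — returning the final scan position through a pointer so the caller resumes where the scan stopped instead of rescanning — can be illustrated with a minimal userspace C sketch (all names are hypothetical; this is not the zsmalloc code):

```c
#include <stddef.h>
#include <assert.h>

/*
 * Sketch of the resume-from-index idea: the helper scans forward from
 * *idx and writes the final position back through the pointer, so the
 * caller's next call continues from where this scan stopped rather
 * than from a stale starting index.
 */
static int find_next_used(const int *slots, size_t nslots, size_t *idx)
{
	size_t i = *idx;
	int handle = 0;

	while (i < nslots) {
		if (slots[i]) {		/* "allocated" slot found */
			handle = slots[i];
			break;
		}
		i++;
	}
	*idx = i;	/* report back how far the scan got */
	return handle;
}
```

A caller would loop with `while ((handle = find_next_used(slots, n, &idx))) { ...; idx++; }`, mirroring the migrate_zspage() loop in the patch above.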

* Re: [PATCH 4/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects
  2016-07-01  6:41 ` [PATCH 4/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects Ganesh Mahendran
@ 2016-07-03 23:58   ` Minchan Kim
  0 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2016-07-03 23:58 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Fri, Jul 01, 2016 at 02:41:02PM +0800, Ganesh Mahendran wrote:
> The max number of objects in a zspage is now stored in each size_class,
> so there is no need to re-calculate it.
> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>

* Re: [PATCH 5/8] mm/zsmalloc: avoid calculate max objects of zspage twice
  2016-07-01  6:41 ` [PATCH 5/8] mm/zsmalloc: avoid calculate max objects of zspage twice Ganesh Mahendran
@ 2016-07-04  0:03   ` Minchan Kim
  2016-07-04  3:26     ` Ganesh Mahendran
  0 siblings, 1 reply; 19+ messages in thread
From: Minchan Kim @ 2016-07-04  0:03 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Fri, Jul 01, 2016 at 02:41:03PM +0800, Ganesh Mahendran wrote:
> Currently, if a class cannot be merged, the max objects of a zspage
> in that class may be calculated twice.
> 
> This patch calculates the max objects of a zspage up front, and passes
> the value to can_merge() to decide whether the class can be merged.
> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> ---
>  mm/zsmalloc.c | 21 ++++++++++-----------
>  1 file changed, 10 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 50283b1..2690914 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1362,16 +1362,14 @@ static void init_zs_size_classes(void)
>  	zs_size_classes = nr;
>  }
>  
> -static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
> +static bool can_merge(struct size_class *prev, int pages_per_zspage,
> +					int objs_per_zspage)
>  {
> -	if (prev->pages_per_zspage != pages_per_zspage)
> -		return false;
> -
> -	if (prev->objs_per_zspage
> -		!= get_maxobj_per_zspage(size, pages_per_zspage))
> -		return false;
> +	if (prev->pages_per_zspage == pages_per_zspage &&
> +		prev->objs_per_zspage == objs_per_zspage)
> +		return true;
>  
> -	return true;
> +	return false;
>  }
>  
>  static bool zspage_full(struct size_class *class, struct zspage *zspage)
> @@ -2460,6 +2458,7 @@ struct zs_pool *zs_create_pool(const char *name)
>  	for (i = zs_size_classes - 1; i >= 0; i--) {
>  		int size;
>  		int pages_per_zspage;
> +		int objs_per_zspage;
>  		struct size_class *class;
>  		int fullness = 0;
>  
> @@ -2467,6 +2466,7 @@ struct zs_pool *zs_create_pool(const char *name)
>  		if (size > ZS_MAX_ALLOC_SIZE)
>  			size = ZS_MAX_ALLOC_SIZE;
>  		pages_per_zspage = get_pages_per_zspage(size);
> +		objs_per_zspage = get_maxobj_per_zspage(size, pages_per_zspage);

So, the only user of get_maxobj_per_zspage is here? If so, let's remove
get_maxobj_per_zspage to prevent misuse in the future, and open-code the
calculation here instead.
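For context, the open-coded version might look like the userspace sketch below. The formula assumes get_maxobj_per_zspage() is simply "bytes in the zspage divided by object size", which matches how it is used here but should be checked against the real helper; the flat can_merge() signature is likewise illustrative:

```c
#include <assert.h>

#define PAGE_SIZE 4096	/* illustration only; the kernel's is per-arch */

/* Open-coded max-objects calculation (assumed formula, see above). */
static int objs_in_zspage(int size, int pages_per_zspage)
{
	return pages_per_zspage * PAGE_SIZE / size;
}

/*
 * With the value computed once up front, merging only has to compare
 * the two derived quantities instead of recomputing one of them.
 */
static int can_merge(int prev_pages, int prev_objs,
		     int pages_per_zspage, int objs_per_zspage)
{
	return prev_pages == pages_per_zspage &&
		prev_objs == objs_per_zspage;
}
```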


>  
>  		/*
>  		 * size_class is used for normal zsmalloc operation such
> @@ -2478,7 +2478,7 @@ struct zs_pool *zs_create_pool(const char *name)
>  		 * previous size_class if possible.
>  		 */
>  		if (prev_class) {
> -			if (can_merge(prev_class, size, pages_per_zspage)) {
> +			if (can_merge(prev_class, pages_per_zspage, objs_per_zspage)) {
>  				pool->size_class[i] = prev_class;
>  				continue;
>  			}
> @@ -2491,8 +2491,7 @@ struct zs_pool *zs_create_pool(const char *name)
>  		class->size = size;
>  		class->index = i;
>  		class->pages_per_zspage = pages_per_zspage;
> -		class->objs_per_zspage = get_maxobj_per_zspage(class->size,
> -							class->pages_per_zspage);
> +		class->objs_per_zspage = objs_per_zspage;
>  		spin_lock_init(&class->lock);
>  		pool->size_class[i] = class;
>  		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
> -- 
> 1.9.1
> 

* Re: [PATCH 6/8] mm/zsmalloc: keep comments consistent with code
  2016-07-01  6:41 ` [PATCH 6/8] mm/zsmalloc: keep comments consistent with code Ganesh Mahendran
@ 2016-07-04  0:05   ` Minchan Kim
  2016-07-04  3:32     ` Ganesh Mahendran
  0 siblings, 1 reply; 19+ messages in thread
From: Minchan Kim @ 2016-07-04  0:05 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Fri, Jul 01, 2016 at 02:41:04PM +0800, Ganesh Mahendran wrote:
> Some minor changes to comments:
> 1) update the zs_malloc()/zs_create_pool() function headers
> 2) update "Usage of struct page fields"
> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> ---
>  mm/zsmalloc.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 2690914..6fc631a 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -20,6 +20,7 @@
>   *	page->freelist(index): links together all component pages of a zspage
>   *		For the huge page, this is always 0, so we use this field
>   *		to store handle.
> + *	page->units: first object index in a subpage of zspage

Hmm, I want to use offset instead of index.

* Re: [PATCH 7/8] mm/zsmalloc: add __init,__exit attribute
  2016-07-01  6:41 ` [PATCH 7/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
@ 2016-07-04  0:09   ` Minchan Kim
  0 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2016-07-04  0:09 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Fri, Jul 01, 2016 at 02:41:05PM +0800, Ganesh Mahendran wrote:
> Add __init/__exit attributes for functions that are only called at
> module init/exit

                   to save memory.

> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> ---
>  mm/zsmalloc.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 6fc631a..1c7460b 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1349,7 +1349,7 @@ static void zs_unregister_cpu_notifier(void)
>  	cpu_notifier_register_done();
>  }
>  
> -static void init_zs_size_classes(void)
> +static void __init init_zs_size_classes(void)
>  {
>  	int nr;
>  
> @@ -1896,7 +1896,7 @@ static struct file_system_type zsmalloc_fs = {
>  	.kill_sb	= kill_anon_super,
>  };
>  
> -static int zsmalloc_mount(void)
> +static int __init zsmalloc_mount(void)
>  {
>  	int ret = 0;
>  
> @@ -1907,7 +1907,7 @@ static int zsmalloc_mount(void)
>  	return ret;
>  }
>  
> -static void zsmalloc_unmount(void)
> +static void __exit zsmalloc_unmount(void)
>  {
>  	kern_unmount(zsmalloc_mnt);
>  }

Couldn't we do it for zs_[un]register_cpu_notifier?

* Re: [PATCH 8/8] mm/zsmalloc: use helper to clear page->flags bit
  2016-07-01  6:41 ` [PATCH 8/8] mm/zsmalloc: use helper to clear page->flags bit Ganesh Mahendran
@ 2016-07-04  0:11   ` Minchan Kim
  0 siblings, 0 replies; 19+ messages in thread
From: Minchan Kim @ 2016-07-04  0:11 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Fri, Jul 01, 2016 at 02:41:06PM +0800, Ganesh Mahendran wrote:
> Use the ClearPagePrivate/ClearPagePrivate2 helpers to clear
> PG_private/PG_private_2 in page->flags
> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>

* Re: [PATCH 2/8] mm/zsmalloc: add per class compact trace event
  2016-07-03 23:49   ` Minchan Kim
@ 2016-07-04  3:12     ` Ganesh Mahendran
  0 siblings, 0 replies; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-04  3:12 UTC (permalink / raw)
  To: Minchan Kim
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

Hi, Minchan:

On Mon, Jul 04, 2016 at 08:49:21AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:00PM +0800, Ganesh Mahendran wrote:
> > Add a per-class compact trace event. It shows how many zspages are
> > isolated and how many are reclaimed.
> 
> I don't know what you want with this event trace.
> 
> What's the relation between the number of zspages isolated and zspages
> reclaimed? IOW, with that, what do you want to know?

This is only for our internal debugging, and it is not generally useful to everyone.

Please ignore this patch. Sorry for the noise.

Thanks

> 
> > 
> > ----
> >   <...>-627   [002] ....   192.641122: zs_compact_start: pool zram0
> >   <...>-627   [002] ....   192.641166: zs_compact_class: class 254: 0 zspage isolated, 0 reclaimed
> >   <...>-627   [002] ....   192.641169: zs_compact_class: class 202: 0 zspage isolated, 0 reclaimed
> >   <...>-627   [002] ....   192.641172: zs_compact_class: class 190: 0 zspage isolated, 0 reclaimed
> >   <...>-627   [002] ....   192.641180: zs_compact_class: class 168: 3 zspage isolated, 1 reclaimed
> >   <...>-627   [002] ....   192.641190: zs_compact_class: class 151: 3 zspage isolated, 1 reclaimed
> >   <...>-627   [002] ....   192.641201: zs_compact_class: class 144: 6 zspage isolated, 1 reclaimed
> >   <...>-627   [002] ....   192.641224: zs_compact_class: class 126: 24 zspage isolated, 12 reclaimed
> >   <...>-627   [002] ....   192.641261: zs_compact_class: class 111: 10 zspage isolated, 2 reclaimed
> > kswapd0-627   [002] ....   192.641333: zs_compact_class: class 107: 38 zspage isolated, 8 reclaimed
> > kswapd0-627   [002] ....   192.641415: zs_compact_class: class 100: 45 zspage isolated, 12 reclaimed
> > kswapd0-627   [002] ....   192.641481: zs_compact_class: class  94: 24 zspage isolated, 5 reclaimed
> > kswapd0-627   [002] ....   192.641568: zs_compact_class: class  91: 69 zspage isolated, 14 reclaimed
> > kswapd0-627   [002] ....   192.641688: zs_compact_class: class  83: 120 zspage isolated, 47 reclaimed
> > kswapd0-627   [002] ....   192.641765: zs_compact_class: class  76: 34 zspage isolated, 5 reclaimed
> > kswapd0-627   [002] ....   192.641832: zs_compact_class: class  74: 34 zspage isolated, 6 reclaimed
> > kswapd0-627   [002] ....   192.641958: zs_compact_class: class  71: 66 zspage isolated, 17 reclaimed
> > kswapd0-627   [002] ....   192.642000: zs_compact_class: class  67: 17 zspage isolated, 3 reclaimed
> > kswapd0-627   [002] ....   192.642063: zs_compact_class: class  66: 29 zspage isolated, 5 reclaimed
> > kswapd0-627   [002] ....   192.642113: zs_compact_class: class  62: 38 zspage isolated, 12 reclaimed
> > kswapd0-627   [002] ....   192.642143: zs_compact_class: class  58: 8 zspage isolated, 1 reclaimed
> > kswapd0-627   [002] ....   192.642176: zs_compact_class: class  57: 25 zspage isolated, 5 reclaimed
> > kswapd0-627   [002] ....   192.642184: zs_compact_class: class  54: 11 zspage isolated, 2 reclaimed
> > kswapd0-627   [002] ....   192.642191: zs_compact_class: class  52: 5 zspage isolated, 1 reclaimed
> > kswapd0-627   [002] ....   192.642201: zs_compact_class: class  51: 6 zspage isolated, 1 reclaimed
> > kswapd0-627   [002] ....   192.642211: zs_compact_class: class  49: 11 zspage isolated, 3 reclaimed
> > kswapd0-627   [002] ....   192.642216: zs_compact_class: class  46: 2 zspage isolated, 1 reclaimed
> > kswapd0-627   [002] ....   192.642218: zs_compact_class: class  44: 0 zspage isolated, 0 reclaimed
> > kswapd0-627   [002] ....   192.642221: zs_compact_class: class  43: 0 zspage isolated, 0 reclaimed
> >   ...
> > ----
> > 
> > Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> > ---
> >  include/trace/events/zsmalloc.h | 24 ++++++++++++++++++++++++
> >  mm/zsmalloc.c                   | 16 +++++++++++++++-
> >  2 files changed, 39 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/trace/events/zsmalloc.h b/include/trace/events/zsmalloc.h
> > index c7a39f4..e745246 100644
> > --- a/include/trace/events/zsmalloc.h
> > +++ b/include/trace/events/zsmalloc.h
> > @@ -46,6 +46,30 @@ TRACE_EVENT(zs_compact_end,
> >  		  __entry->pages_compacted)
> >  );
> >  
> > +TRACE_EVENT(zs_compact_class,
> > +
> > +	TP_PROTO(int class, unsigned long zspage_isolated, unsigned long zspage_reclaimed),
> > +
> > +	TP_ARGS(class, zspage_isolated, zspage_reclaimed),
> > +
> > +	TP_STRUCT__entry(
> > +		__field(int, class)
> > +		__field(unsigned long, zspage_isolated)
> > +		__field(unsigned long, zspage_reclaimed)
> > +	),
> > +
> > +	TP_fast_assign(
> > +		__entry->class = class;
> > +		__entry->zspage_isolated = zspage_isolated;
> > +		__entry->zspage_reclaimed = zspage_reclaimed;
> > +	),
> > +
> > +	TP_printk("class %3d: %ld zspage isolated, %ld zspage reclaimed",
> > +		  __entry->class,
> > +		  __entry->zspage_isolated,
> > +		  __entry->zspage_reclaimed)
> > +);
> > +
> >  #endif /* _TRACE_ZSMALLOC_H */
> >  
> >  /* This part must be outside protection */
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index c7f79d5..405baa5 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1780,6 +1780,11 @@ struct zs_compact_control {
> >  	 /* Starting object index within @s_page which used for live object
> >  	  * in the subpage. */
> >  	int index;
> > +
> > +	/* zspage isolated */
> > +	unsigned long nr_isolated;
> > +	/* zspage reclaimed */
> > +	unsigned long nr_reclaimed;
> >  };
> >  
> >  static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> > @@ -2272,7 +2277,10 @@ static unsigned long zs_can_compact(struct size_class *class)
> >  
> >  static void __zs_compact(struct zs_pool *pool, struct size_class *class)
> >  {
> > -	struct zs_compact_control cc;
> > +	struct zs_compact_control cc = {
> > +		.nr_isolated = 0,
> > +		.nr_reclaimed = 0,
> > +	};
> >  	struct zspage *src_zspage;
> >  	struct zspage *dst_zspage = NULL;
> >  
> > @@ -2282,10 +2290,13 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
> >  		if (!zs_can_compact(class))
> >  			break;
> >  
> > +		cc.nr_isolated++;
> > +
> >  		cc.index = 0;
> >  		cc.s_page = get_first_page(src_zspage);
> >  
> >  		while ((dst_zspage = isolate_zspage(class, false))) {
> > +			cc.nr_isolated++;
> >  			cc.d_page = get_first_page(dst_zspage);
> >  			/*
> >  			 * If there is no more space in dst_page, resched
> > @@ -2304,6 +2315,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
> >  		putback_zspage(class, dst_zspage);
> >  		if (putback_zspage(class, src_zspage) == ZS_EMPTY) {
> >  			free_zspage(pool, class, src_zspage);
> > +			cc.nr_reclaimed++;
> >  			pool->stats.pages_compacted += class->pages_per_zspage;
> >  		}
> >  		spin_unlock(&class->lock);
> > @@ -2315,6 +2327,8 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
> >  		putback_zspage(class, src_zspage);
> >  
> >  	spin_unlock(&class->lock);
> > +
> > +	trace_zs_compact_class(class->index, cc.nr_isolated, cc.nr_reclaimed);
> >  }
> >  
> >  unsigned long zs_compact(struct zs_pool *pool)
> > -- 
> > 1.9.1
> > 

* Re: [PATCH 3/8] mm/zsmalloc: take obj index back from find_alloced_obj
  2016-07-03 23:57   ` Minchan Kim
@ 2016-07-04  3:23     ` Ganesh Mahendran
  0 siblings, 0 replies; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-04  3:23 UTC (permalink / raw)
  To: Minchan Kim
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Mon, Jul 04, 2016 at 08:57:04AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:01PM +0800, Ganesh Mahendran wrote:
> > The obj index value should be updated after returning from
> > find_alloced_obj()
>  
>         to avoid CPU burning caused by unnecessary object scanning.
> 
> The description should state the goal of the change.

Thanks for your reminder.

> 
> > 
> > Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> > ---
> >  mm/zsmalloc.c | 13 ++++++++-----
> >  1 file changed, 8 insertions(+), 5 deletions(-)
> > 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 405baa5..5c96ed1 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1744,15 +1744,16 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
> >   * return handle.
> >   */
> >  static unsigned long find_alloced_obj(struct size_class *class,
> > -					struct page *page, int index)
> > +					struct page *page, int *index)
> >  {
> >  	unsigned long head;
> >  	int offset = 0;
> > +	int objidx = *index;
> 
> Nit:
> 
> We already use obj_idx elsewhere, so I prefer it for consistency.

will do it.

> 
> Suggestion:
> Would you mind renaming index in zs_compact_control and
> migrate_zspage to obj_idx while you are at it?

I will add a clean up patch in this patchset.

> 
> Strictly speaking, such a clean-up belongs in a separate patch, but I
> don't mind mixing it in here (of course, sending it as another clean-up
> patch would be better). If you would rather not, just leave it as is.
> I will do it sometime.
> 
> >  	unsigned long handle = 0;
> >  	void *addr = kmap_atomic(page);
> >  
> >  	offset = get_first_obj_offset(page);
> > -	offset += class->size * index;
> > +	offset += class->size * objidx;
> >  
> >  	while (offset < PAGE_SIZE) {
> >  		head = obj_to_head(page, addr + offset);
> > @@ -1764,9 +1765,11 @@ static unsigned long find_alloced_obj(struct size_class *class,
> >  		}
> >  
> >  		offset += class->size;
> > -		index++;
> > +		objidx++;
> >  	}
> >  
> > +	*index = objidx;
> 
> We can do this outside the kmap section, right before returning the handle.

That's right. I will send a V2 patch soon.

Thanks.

> 
> Thanks!
> 
> > +
> >  	kunmap_atomic(addr);
> >  	return handle;
> >  }
> > @@ -1794,11 +1797,11 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> >  	unsigned long handle;
> >  	struct page *s_page = cc->s_page;
> >  	struct page *d_page = cc->d_page;
> > -	unsigned long index = cc->index;
> > +	unsigned int index = cc->index;
> >  	int ret = 0;
> >  
> >  	while (1) {
> > -		handle = find_alloced_obj(class, s_page, index);
> > +		handle = find_alloced_obj(class, s_page, &index);
> >  		if (!handle) {
> >  			s_page = get_next_page(s_page);
> >  			if (!s_page)
> > -- 
> > 1.9.1
> > 

* Re: [PATCH 5/8] mm/zsmalloc: avoid calculate max objects of zspage twice
  2016-07-04  0:03   ` Minchan Kim
@ 2016-07-04  3:26     ` Ganesh Mahendran
  0 siblings, 0 replies; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-04  3:26 UTC (permalink / raw)
  To: Minchan Kim
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Mon, Jul 04, 2016 at 09:03:18AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:03PM +0800, Ganesh Mahendran wrote:
> > Currently, if a class cannot be merged, the max objects of a zspage
> > in that class may be calculated twice.
> > 
> > This patch calculates the max objects of a zspage up front, and passes
> > the value to can_merge() to decide whether the class can be merged.
> > 
> > Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> > ---
> >  mm/zsmalloc.c | 21 ++++++++++-----------
> >  1 file changed, 10 insertions(+), 11 deletions(-)
> > 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 50283b1..2690914 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1362,16 +1362,14 @@ static void init_zs_size_classes(void)
> >  	zs_size_classes = nr;
> >  }
> >  
> > -static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
> > +static bool can_merge(struct size_class *prev, int pages_per_zspage,
> > +					int objs_per_zspage)
> >  {
> > -	if (prev->pages_per_zspage != pages_per_zspage)
> > -		return false;
> > -
> > -	if (prev->objs_per_zspage
> > -		!= get_maxobj_per_zspage(size, pages_per_zspage))
> > -		return false;
> > +	if (prev->pages_per_zspage == pages_per_zspage &&
> > +		prev->objs_per_zspage == objs_per_zspage)
> > +		return true;
> >  
> > -	return true;
> > +	return false;
> >  }
> >  
> >  static bool zspage_full(struct size_class *class, struct zspage *zspage)
> > @@ -2460,6 +2458,7 @@ struct zs_pool *zs_create_pool(const char *name)
> >  	for (i = zs_size_classes - 1; i >= 0; i--) {
> >  		int size;
> >  		int pages_per_zspage;
> > +		int objs_per_zspage;
> >  		struct size_class *class;
> >  		int fullness = 0;
> >  
> > @@ -2467,6 +2466,7 @@ struct zs_pool *zs_create_pool(const char *name)
> >  		if (size > ZS_MAX_ALLOC_SIZE)
> >  			size = ZS_MAX_ALLOC_SIZE;
> >  		pages_per_zspage = get_pages_per_zspage(size);
> > +		objs_per_zspage = get_maxobj_per_zspage(size, pages_per_zspage);
> 
> So, the only user of get_maxobj_per_zspage is here? If so, let's remove
> get_maxobj_per_zspage to prevent misuse in the future, and open-code the
> calculation here instead.

Yes, get_maxobj_per_zspage is only called here. 
I will remove it in V2.

Thanks.

> 
> 
> >  
> >  		/*
> >  		 * size_class is used for normal zsmalloc operation such
> > @@ -2478,7 +2478,7 @@ struct zs_pool *zs_create_pool(const char *name)
> >  		 * previous size_class if possible.
> >  		 */
> >  		if (prev_class) {
> > -			if (can_merge(prev_class, size, pages_per_zspage)) {
> > +			if (can_merge(prev_class, pages_per_zspage, objs_per_zspage)) {
> >  				pool->size_class[i] = prev_class;
> >  				continue;
> >  			}
> > @@ -2491,8 +2491,7 @@ struct zs_pool *zs_create_pool(const char *name)
> >  		class->size = size;
> >  		class->index = i;
> >  		class->pages_per_zspage = pages_per_zspage;
> > -		class->objs_per_zspage = get_maxobj_per_zspage(class->size,
> > -							class->pages_per_zspage);
> > +		class->objs_per_zspage = objs_per_zspage;
> >  		spin_lock_init(&class->lock);
> >  		pool->size_class[i] = class;
> >  		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
> > -- 
> > 1.9.1
> > 

* Re: [PATCH 6/8] mm/zsmalloc: keep comments consistent with code
  2016-07-04  0:05   ` Minchan Kim
@ 2016-07-04  3:32     ` Ganesh Mahendran
  0 siblings, 0 replies; 19+ messages in thread
From: Ganesh Mahendran @ 2016-07-04  3:32 UTC (permalink / raw)
  To: Minchan Kim
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

On Mon, Jul 04, 2016 at 09:05:16AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:04PM +0800, Ganesh Mahendran wrote:
> > Some minor changes to comments:
> > 1) update the zs_malloc()/zs_create_pool() function headers
> > 2) update "Usage of struct page fields"
> > 
> > Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> > ---
> >  mm/zsmalloc.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> > 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 2690914..6fc631a 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -20,6 +20,7 @@
> >   *	page->freelist(index): links together all component pages of a zspage
> >   *		For the huge page, this is always 0, so we use this field
> >   *		to store handle.
> > + *	page->units: first object index in a subpage of zspage
> 
> Hmm, I want to use offset instead of index.

Yes, it should be offset here. I mixed it up with the obj index. :)

Thanks

end of thread, other threads:[~2016-07-04  3:32 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-01  6:40 [PATCH 1/8] mm/zsmalloc: modify zs compact trace interface Ganesh Mahendran
2016-07-01  6:41 ` [PATCH 2/8] mm/zsmalloc: add per class compact trace event Ganesh Mahendran
2016-07-03 23:49   ` Minchan Kim
2016-07-04  3:12     ` Ganesh Mahendran
2016-07-01  6:41 ` [PATCH 3/8] mm/zsmalloc: take obj index back from find_alloced_obj Ganesh Mahendran
2016-07-03 23:57   ` Minchan Kim
2016-07-04  3:23     ` Ganesh Mahendran
2016-07-01  6:41 ` [PATCH 4/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects Ganesh Mahendran
2016-07-03 23:58   ` Minchan Kim
2016-07-01  6:41 ` [PATCH 5/8] mm/zsmalloc: avoid calculate max objects of zspage twice Ganesh Mahendran
2016-07-04  0:03   ` Minchan Kim
2016-07-04  3:26     ` Ganesh Mahendran
2016-07-01  6:41 ` [PATCH 6/8] mm/zsmalloc: keep comments consistent with code Ganesh Mahendran
2016-07-04  0:05   ` Minchan Kim
2016-07-04  3:32     ` Ganesh Mahendran
2016-07-01  6:41 ` [PATCH 7/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
2016-07-04  0:09   ` Minchan Kim
2016-07-01  6:41 ` [PATCH 8/8] mm/zsmalloc: use helper to clear page->flags bit Ganesh Mahendran
2016-07-04  0:11   ` Minchan Kim
