linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others
@ 2016-07-06  6:23 Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 2/8] mm/zsmalloc: take obj index back from find_alloced_obj Ganesh Mahendran
                   ` (6 more replies)
  0 siblings, 7 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  6:23 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

This is a cleanup patch. Change "index" to "obj_idx" to keep it
consistent with the rest of zsmalloc.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
v3: none
v2: none
---
 mm/zsmalloc.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e425de4..3a37977 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1779,7 +1779,7 @@ struct zs_compact_control {
 	struct page *d_page;
 	 /* Starting object index within @s_page which used for live object
 	  * in the subpage. */
-	int index;
+	int obj_idx;
 };
 
 static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
@@ -1789,16 +1789,16 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 	unsigned long handle;
 	struct page *s_page = cc->s_page;
 	struct page *d_page = cc->d_page;
-	unsigned long index = cc->index;
+	int obj_idx = cc->obj_idx;
 	int ret = 0;
 
 	while (1) {
-		handle = find_alloced_obj(class, s_page, index);
+		handle = find_alloced_obj(class, s_page, obj_idx);
 		if (!handle) {
 			s_page = get_next_page(s_page);
 			if (!s_page)
 				break;
-			index = 0;
+			obj_idx = 0;
 			continue;
 		}
 
@@ -1812,7 +1812,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		used_obj = handle_to_obj(handle);
 		free_obj = obj_malloc(class, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
-		index++;
+		obj_idx++;
 		/*
 		 * record_obj updates handle's value to free_obj and it will
 		 * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which
@@ -1827,7 +1827,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 
 	/* Remember last position in this iteration */
 	cc->s_page = s_page;
-	cc->index = index;
+	cc->obj_idx = obj_idx;
 
 	return ret;
 }
@@ -2282,7 +2282,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		if (!zs_can_compact(class))
 			break;
 
-		cc.index = 0;
+		cc.obj_idx = 0;
 		cc.s_page = get_first_page(src_zspage);
 
 		while ((dst_zspage = isolate_zspage(class, false))) {
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 2/8] mm/zsmalloc: take obj index back from find_alloced_obj
  2016-07-06  6:23 [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others Ganesh Mahendran
@ 2016-07-06  6:23 ` Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 3/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects Ganesh Mahendran
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  6:23 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

The object index should be updated after returning from
find_alloced_obj(), to avoid burning CPU on re-scanning objects
that have already been checked.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
v3:
  none
v2:
  - update commit description
---
 mm/zsmalloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3a37977..1f144f1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1744,10 +1744,11 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
  * return handle.
  */
 static unsigned long find_alloced_obj(struct size_class *class,
-					struct page *page, int index)
+					struct page *page, int *obj_idx)
 {
 	unsigned long head;
 	int offset = 0;
+	int index = *obj_idx;
 	unsigned long handle = 0;
 	void *addr = kmap_atomic(page);
 
@@ -1768,6 +1769,9 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	}
 
 	kunmap_atomic(addr);
+
+	*obj_idx = index;
+
 	return handle;
 }
 
@@ -1793,7 +1797,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 	int ret = 0;
 
 	while (1) {
-		handle = find_alloced_obj(class, s_page, obj_idx);
+		handle = find_alloced_obj(class, s_page, &obj_idx);
 		if (!handle) {
 			s_page = get_next_page(s_page);
 			if (!s_page)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 3/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects
  2016-07-06  6:23 [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 2/8] mm/zsmalloc: take obj index back from find_alloced_obj Ganesh Mahendran
@ 2016-07-06  6:23 ` Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 4/8] mm/zsmalloc: avoid calculate max objects of zspage twice Ganesh Mahendran
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  6:23 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

The maximum number of objects per zspage is now stored in each
size_class, so there is no need to recalculate it.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
---
 mm/zsmalloc.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1f144f1..82ff2c0 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -638,8 +638,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		freeable = zs_can_compact(class);
 		spin_unlock(&class->lock);
 
-		objs_per_zspage = get_maxobj_per_zspage(class->size,
-				class->pages_per_zspage);
+		objs_per_zspage = class->objs_per_zspage;
 		pages_used = obj_allocated / objs_per_zspage *
 				class->pages_per_zspage;
 
@@ -1017,8 +1016,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	cache_free_zspage(pool, zspage);
 
-	zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-			class->size, class->pages_per_zspage));
+	zs_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
 	atomic_long_sub(class->pages_per_zspage,
 					&pool->pages_allocated);
 }
@@ -1369,7 +1367,7 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
 	if (prev->pages_per_zspage != pages_per_zspage)
 		return false;
 
-	if (get_maxobj_per_zspage(prev->size, prev->pages_per_zspage)
+	if (prev->objs_per_zspage
 		!= get_maxobj_per_zspage(size, pages_per_zspage))
 		return false;
 
@@ -1595,8 +1593,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	record_obj(handle, obj);
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
-	zs_stat_inc(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-			class->size, class->pages_per_zspage));
+	zs_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -2268,8 +2265,7 @@ static unsigned long zs_can_compact(struct size_class *class)
 		return 0;
 
 	obj_wasted = obj_allocated - obj_used;
-	obj_wasted /= get_maxobj_per_zspage(class->size,
-			class->pages_per_zspage);
+	obj_wasted /= class->objs_per_zspage;
 
 	return obj_wasted * class->pages_per_zspage;
 }
@@ -2483,8 +2479,8 @@ struct zs_pool *zs_create_pool(const char *name)
 		class->size = size;
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
-		class->objs_per_zspage = class->pages_per_zspage *
-						PAGE_SIZE / class->size;
+		class->objs_per_zspage = get_maxobj_per_zspage(class->size,
+							class->pages_per_zspage);
 		spin_lock_init(&class->lock);
 		pool->size_class[i] = class;
 		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 4/8] mm/zsmalloc: avoid calculate max objects of zspage twice
  2016-07-06  6:23 [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 2/8] mm/zsmalloc: take obj index back from find_alloced_obj Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 3/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects Ganesh Mahendran
@ 2016-07-06  6:23 ` Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 5/8] mm/zsmalloc: keep comments consistent with code Ganesh Mahendran
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  6:23 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Currently, if a class cannot be merged, the maximum number of objects
per zspage in that class may be calculated twice.

This patch calculates the maximum number of objects per zspage up
front and passes the value to can_merge() to decide whether the class
can be merged.

This patch also removes get_maxobj_per_zspage(), as no other place
calls it.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
v3:
    none
v2:
    remove get_maxobj_per_zspage()  - Minchan
---
 mm/zsmalloc.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 82ff2c0..82b9977 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -470,11 +470,6 @@ static struct zpool_driver zs_zpool_driver = {
 MODULE_ALIAS("zpool-zsmalloc");
 #endif /* CONFIG_ZPOOL */
 
-static unsigned int get_maxobj_per_zspage(int size, int pages_per_zspage)
-{
-	return pages_per_zspage * PAGE_SIZE / size;
-}
-
 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
 static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
 
@@ -1362,16 +1357,14 @@ static void init_zs_size_classes(void)
 	zs_size_classes = nr;
 }
 
-static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
+static bool can_merge(struct size_class *prev, int pages_per_zspage,
+					int objs_per_zspage)
 {
-	if (prev->pages_per_zspage != pages_per_zspage)
-		return false;
+	if (prev->pages_per_zspage == pages_per_zspage &&
+		prev->objs_per_zspage == objs_per_zspage)
+		return true;
 
-	if (prev->objs_per_zspage
-		!= get_maxobj_per_zspage(size, pages_per_zspage))
-		return false;
-
-	return true;
+	return false;
 }
 
 static bool zspage_full(struct size_class *class, struct zspage *zspage)
@@ -2448,6 +2441,7 @@ struct zs_pool *zs_create_pool(const char *name)
 	for (i = zs_size_classes - 1; i >= 0; i--) {
 		int size;
 		int pages_per_zspage;
+		int objs_per_zspage;
 		struct size_class *class;
 		int fullness = 0;
 
@@ -2455,6 +2449,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		if (size > ZS_MAX_ALLOC_SIZE)
 			size = ZS_MAX_ALLOC_SIZE;
 		pages_per_zspage = get_pages_per_zspage(size);
+		objs_per_zspage = pages_per_zspage * PAGE_SIZE / size;
 
 		/*
 		 * size_class is used for normal zsmalloc operation such
@@ -2466,7 +2461,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		 * previous size_class if possible.
 		 */
 		if (prev_class) {
-			if (can_merge(prev_class, size, pages_per_zspage)) {
+			if (can_merge(prev_class, pages_per_zspage, objs_per_zspage)) {
 				pool->size_class[i] = prev_class;
 				continue;
 			}
@@ -2479,8 +2474,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		class->size = size;
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
-		class->objs_per_zspage = get_maxobj_per_zspage(class->size,
-							class->pages_per_zspage);
+		class->objs_per_zspage = objs_per_zspage;
 		spin_lock_init(&class->lock);
 		pool->size_class[i] = class;
 		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 5/8] mm/zsmalloc: keep comments consistent with code
  2016-07-06  6:23 [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others Ganesh Mahendran
                   ` (2 preceding siblings ...)
  2016-07-06  6:23 ` [PATCH v3 4/8] mm/zsmalloc: avoid calculate max objects of zspage twice Ganesh Mahendran
@ 2016-07-06  6:23 ` Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  6:23 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Some minor comment changes:
1) update the zs_malloc() and zs_create_pool() function headers
2) update "Usage of struct page fields"

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
v3:
    none
v2:
    change *object index* to *object offset* - Minchan
---
 mm/zsmalloc.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 82b9977..ded312b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -20,6 +20,7 @@
  *	page->freelist(index): links together all component pages of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
+ *	page->units: first object offset in a subpage of zspage
  *
  * Usage of struct page flags:
  *	PG_private: identifies the first component page
@@ -140,9 +141,6 @@
  */
 #define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> CLASS_BITS)
 
-/*
- * We do not maintain any list for completely empty or full pages
- */
 enum fullness_group {
 	ZS_EMPTY,
 	ZS_ALMOST_EMPTY,
@@ -1535,6 +1533,7 @@ static unsigned long obj_malloc(struct size_class *class,
  * zs_malloc - Allocate block of given size from pool.
  * @pool: pool to allocate from
  * @size: size of block to allocate
+ * @gfp: gfp flags when allocating object
  *
  * On success, handle to the allocated object is returned,
  * otherwise 0.
@@ -2401,7 +2400,7 @@ static int zs_register_shrinker(struct zs_pool *pool)
 
 /**
  * zs_create_pool - Creates an allocation pool to work from.
- * @flags: allocation flags used to allocate pool metadata
+ * @name: pool name to be created
  *
  * This function must be called before anything when using
  * the zsmalloc allocator.
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute
  2016-07-06  6:23 [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others Ganesh Mahendran
                   ` (3 preceding siblings ...)
  2016-07-06  6:23 ` [PATCH v3 5/8] mm/zsmalloc: keep comments consistent with code Ganesh Mahendran
@ 2016-07-06  6:23 ` Ganesh Mahendran
  2016-07-06  8:01   ` kbuild test robot
                     ` (2 more replies)
  2016-07-06  6:23 ` [PATCH v3 7/8] mm/zsmalloc: use helper to clear page->flags bit Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 8/8] mm/zsmalloc: add per-class compact trace event Ganesh Mahendran
  6 siblings, 3 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  6:23 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Add __init/__exit attributes to functions that are only called at
module init/exit time, to save memory.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
v3:
    revert change in v2 - Sergey
v2:
    add __init/__exit for zs_register_cpu_notifier/zs_unregister_cpu_notifier
---
 mm/zsmalloc.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ded312b..46526b9 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1344,7 +1344,7 @@ static void zs_unregister_cpu_notifier(void)
 	cpu_notifier_register_done();
 }
 
-static void init_zs_size_classes(void)
+static void __init init_zs_size_classes(void)
 {
 	int nr;
 
@@ -1887,7 +1887,7 @@ static struct file_system_type zsmalloc_fs = {
 	.kill_sb	= kill_anon_super,
 };
 
-static int zsmalloc_mount(void)
+static int __init zsmalloc_mount(void)
 {
 	int ret = 0;
 
@@ -1898,7 +1898,7 @@ static int zsmalloc_mount(void)
 	return ret;
 }
 
-static void zsmalloc_unmount(void)
+static void __exit zsmalloc_unmount(void)
 {
 	kern_unmount(zsmalloc_mnt);
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 7/8] mm/zsmalloc: use helper to clear page->flags bit
  2016-07-06  6:23 [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others Ganesh Mahendran
                   ` (4 preceding siblings ...)
  2016-07-06  6:23 ` [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
@ 2016-07-06  6:23 ` Ganesh Mahendran
  2016-07-06  6:23 ` [PATCH v3 8/8] mm/zsmalloc: add per-class compact trace event Ganesh Mahendran
  6 siblings, 0 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  6:23 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Use the ClearPagePrivate/ClearPagePrivate2 helpers to clear
PG_private/PG_private_2 in page->flags.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
---
v3: none
v2: none
---
 mm/zsmalloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 46526b9..17d3f53 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -940,8 +940,8 @@ static void unpin_tag(unsigned long handle)
 static void reset_page(struct page *page)
 {
 	__ClearPageMovable(page);
-	clear_bit(PG_private, &page->flags);
-	clear_bit(PG_private_2, &page->flags);
+	ClearPagePrivate(page);
+	ClearPagePrivate2(page);
 	set_page_private(page, 0);
 	page_mapcount_reset(page);
 	ClearPageHugeObject(page);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v3 8/8] mm/zsmalloc: add per-class compact trace event
  2016-07-06  6:23 [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others Ganesh Mahendran
                   ` (5 preceding siblings ...)
  2016-07-06  6:23 ` [PATCH v3 7/8] mm/zsmalloc: use helper to clear page->flags bit Ganesh Mahendran
@ 2016-07-06  6:23 ` Ganesh Mahendran
  2016-07-07  7:44   ` Minchan Kim
  6 siblings, 1 reply; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  6:23 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo,
	Ganesh Mahendran

Add a per-class compact trace event reporting the number of objects
scanned and pages freed.
The trace log looks like:
----
         kswapd0-629   [001] ....   293.161053: zs_compact_start: pool zram0
         kswapd0-629   [001] ....   293.161056: zs_compact: class 254: 0 objects scanned, 0 pages freed
         kswapd0-629   [001] ....   293.161057: zs_compact: class 202: 0 objects scanned, 0 pages freed
         kswapd0-629   [001] ....   293.161062: zs_compact: class 190: 1 objects scanned, 3 pages freed
         kswapd0-629   [001] ....   293.161063: zs_compact: class 168: 0 objects scanned, 0 pages freed
         kswapd0-629   [001] ....   293.161065: zs_compact: class 151: 0 objects scanned, 0 pages freed
         kswapd0-629   [001] ....   293.161073: zs_compact: class 144: 4 objects scanned, 8 pages freed
         kswapd0-629   [001] ....   293.161087: zs_compact: class 126: 20 objects scanned, 10 pages freed
         kswapd0-629   [001] ....   293.161095: zs_compact: class 111: 6 objects scanned, 8 pages freed
         kswapd0-629   [001] ....   293.161122: zs_compact: class 107: 27 objects scanned, 27 pages freed
         kswapd0-629   [001] ....   293.161157: zs_compact: class 100: 36 objects scanned, 24 pages freed
         kswapd0-629   [001] ....   293.161173: zs_compact: class  94: 10 objects scanned, 15 pages freed
         kswapd0-629   [001] ....   293.161221: zs_compact: class  91: 30 objects scanned, 40 pages freed
         kswapd0-629   [001] ....   293.161256: zs_compact: class  83: 120 objects scanned, 30 pages freed
         kswapd0-629   [001] ....   293.161266: zs_compact: class  76: 8 objects scanned, 8 pages freed
         kswapd0-629   [001] ....   293.161282: zs_compact: class  74: 20 objects scanned, 15 pages freed
         kswapd0-629   [001] ....   293.161306: zs_compact: class  71: 40 objects scanned, 20 pages freed
         kswapd0-629   [001] ....   293.161313: zs_compact: class  67: 8 objects scanned, 6 pages freed
...
         kswapd0-629   [001] ....   293.161454: zs_compact: class   0: 0 objects scanned, 0 pages freed
         kswapd0-629   [001] ....   293.161455: zs_compact_end: pool zram0: 301 pages compacted
----

This patch also renames trace_zsmalloc_compact_start[end] to
trace_zs_compact_start[end] to keep the naming consistent with other
functions in zsmalloc.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
v3:
    add per-class compact trace event - Minchan

    I moved this patch from 1/8 to 8/8, since it depends on the patches below:
       mm/zsmalloc: use obj_index to keep consistent with others
       mm/zsmalloc: take obj index back from find_alloced_obj

v2:
    update commit description
---
 include/trace/events/zsmalloc.h | 40 ++++++++++++++++++++++++++++++----------
 mm/zsmalloc.c                   | 26 ++++++++++++++++++--------
 2 files changed, 48 insertions(+), 18 deletions(-)

diff --git a/include/trace/events/zsmalloc.h b/include/trace/events/zsmalloc.h
index 3b6f14e..96fcca8 100644
--- a/include/trace/events/zsmalloc.h
+++ b/include/trace/events/zsmalloc.h
@@ -7,7 +7,7 @@
 #include <linux/types.h>
 #include <linux/tracepoint.h>
 
-TRACE_EVENT(zsmalloc_compact_start,
+TRACE_EVENT(zs_compact_start,
 
 	TP_PROTO(const char *pool_name),
 
@@ -25,29 +25,49 @@ TRACE_EVENT(zsmalloc_compact_start,
 		  __entry->pool_name)
 );
 
-TRACE_EVENT(zsmalloc_compact_end,
+TRACE_EVENT(zs_compact_end,
 
-	TP_PROTO(const char *pool_name, unsigned long pages_compacted,
-			unsigned long pages_total_compacted),
+	TP_PROTO(const char *pool_name, unsigned long pages_compacted),
 
-	TP_ARGS(pool_name, pages_compacted, pages_total_compacted),
+	TP_ARGS(pool_name, pages_compacted),
 
 	TP_STRUCT__entry(
 		__field(const char *, pool_name)
 		__field(unsigned long, pages_compacted)
-		__field(unsigned long, pages_total_compacted)
 	),
 
 	TP_fast_assign(
 		__entry->pool_name = pool_name;
 		__entry->pages_compacted = pages_compacted;
-		__entry->pages_total_compacted = pages_total_compacted;
 	),
 
-	TP_printk("pool %s: %ld pages compacted(total %ld)",
+	TP_printk("pool %s: %ld pages compacted",
 		  __entry->pool_name,
-		  __entry->pages_compacted,
-		  __entry->pages_total_compacted)
+		  __entry->pages_compacted)
+);
+
+TRACE_EVENT(zs_compact,
+
+	TP_PROTO(int class, unsigned long nr_scanned_obj, unsigned long nr_freed_pages),
+
+	TP_ARGS(class, nr_scanned_obj, nr_freed_pages),
+
+	TP_STRUCT__entry(
+		__field(int, class)
+		__field(unsigned long, nr_scanned_obj)
+		__field(unsigned long, nr_freed_pages)
+	),
+
+	TP_fast_assign(
+		__entry->class = class;
+		__entry->nr_scanned_obj = nr_scanned_obj;
+		__entry->nr_freed_pages = nr_freed_pages;
+	),
+
+	TP_printk("class %3d: %ld objects scanned, %ld pages freed",
+		  __entry->class,
+		  __entry->nr_scanned_obj,
+		  __entry->nr_freed_pages)
 );
 
 #endif /* _TRACE_ZSMALLOC_H */
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 17d3f53..3a1315e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1770,9 +1770,12 @@ struct zs_compact_control {
 	/* Destination page for migration which should be a first page
 	 * of zspage. */
 	struct page *d_page;
-	 /* Starting object index within @s_page which used for live object
-	  * in the subpage. */
+	/* Starting object index within @s_page which used for live object
+	 * in the subpage. */
 	int obj_idx;
+
+	unsigned long nr_scanned_obj;
+	unsigned long nr_freed_pages;
 };
 
 static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
@@ -1818,6 +1821,8 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		obj_free(class, used_obj);
 	}
 
+	cc->nr_scanned_obj += obj_idx - cc->obj_idx;
+
 	/* Remember last position in this iteration */
 	cc->s_page = s_page;
 	cc->obj_idx = obj_idx;
@@ -2264,7 +2269,10 @@ static unsigned long zs_can_compact(struct size_class *class)
 
 static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 {
-	struct zs_compact_control cc;
+	struct zs_compact_control cc = {
+		.nr_scanned_obj = 0,
+		.nr_freed_pages = 0,
+	};
 	struct zspage *src_zspage;
 	struct zspage *dst_zspage = NULL;
 
@@ -2296,7 +2304,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(class, dst_zspage);
 		if (putback_zspage(class, src_zspage) == ZS_EMPTY) {
 			free_zspage(pool, class, src_zspage);
-			pool->stats.pages_compacted += class->pages_per_zspage;
+			cc.nr_freed_pages += class->pages_per_zspage;
 		}
 		spin_unlock(&class->lock);
 		cond_resched();
@@ -2307,6 +2315,9 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(class, src_zspage);
 
 	spin_unlock(&class->lock);
+
+	pool->stats.pages_compacted += cc.nr_freed_pages;
+	trace_zs_compact(class->index, cc.nr_scanned_obj, cc.nr_freed_pages);
 }
 
 unsigned long zs_compact(struct zs_pool *pool)
@@ -2315,7 +2326,7 @@ unsigned long zs_compact(struct zs_pool *pool)
 	struct size_class *class;
 	unsigned long pages_compacted_before = pool->stats.pages_compacted;
 
-	trace_zsmalloc_compact_start(pool->name);
+	trace_zs_compact_start(pool->name);
 
 	for (i = zs_size_classes - 1; i >= 0; i--) {
 		class = pool->size_class[i];
@@ -2326,9 +2337,8 @@ unsigned long zs_compact(struct zs_pool *pool)
 		__zs_compact(pool, class);
 	}
 
-	trace_zsmalloc_compact_end(pool->name,
-		pool->stats.pages_compacted - pages_compacted_before,
-		pool->stats.pages_compacted);
+	trace_zs_compact_end(pool->name,
+		pool->stats.pages_compacted - pages_compacted_before);
 
 	return pool->stats.pages_compacted;
 }
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute
  2016-07-06  6:23 ` [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
@ 2016-07-06  8:01   ` kbuild test robot
  2016-07-06  8:06   ` kbuild test robot
  2016-07-06  8:20   ` Ganesh Mahendran
  2 siblings, 0 replies; 13+ messages in thread
From: kbuild test robot @ 2016-07-06  8:01 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: kbuild-all, linux-kernel, linux-mm, akpm, minchan, ngupta,
	sergey.senozhatsky.work, rostedt, mingo, Ganesh Mahendran

[-- Attachment #1: Type: text/plain, Size: 1190 bytes --]

Hi,

[auto build test WARNING on next-20160705]
[cannot apply to tip/perf/core v4.7-rc6 v4.7-rc5 v4.7-rc4 v4.7-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Ganesh-Mahendran/mm-zsmalloc-use-obj_index-to-keep-consistent-with-others/20160706-150030
config: x86_64-acpi-redef (attached as .config)
compiler: gcc-6 (Debian 6.1.1-1) 6.1.1 20160430
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All warnings (new ones prefixed by >>):

>> WARNING: vmlinux.o(.init.text+0x1f2ca): Section mismatch in reference from the function zs_init() to the function .exit.text:zs_stat_exit()
   The function __init zs_init() references
   a function __exit zs_stat_exit().
   This is often seen when error handling in the init function
   uses functionality in the exit path.
   The fix is often to remove the __exit annotation of
   zs_stat_exit() so it may be used outside an exit section.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 28510 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute
  2016-07-06  6:23 ` [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
  2016-07-06  8:01   ` kbuild test robot
@ 2016-07-06  8:06   ` kbuild test robot
  2016-07-06  8:20   ` Ganesh Mahendran
  2 siblings, 0 replies; 13+ messages in thread
From: kbuild test robot @ 2016-07-06  8:06 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: kbuild-all, linux-kernel, linux-mm, akpm, minchan, ngupta,
	sergey.senozhatsky.work, rostedt, mingo, Ganesh Mahendran

[-- Attachment #1: Type: text/plain, Size: 1211 bytes --]

Hi,

[auto build test WARNING on next-20160705]
[cannot apply to tip/perf/core v4.7-rc6 v4.7-rc5 v4.7-rc4 v4.7-rc6]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Ganesh-Mahendran/mm-zsmalloc-use-obj_index-to-keep-consistent-with-others/20160706-150030
config: i386-randconfig-a0-201627 (attached as .config)
compiler: gcc-6 (Debian 6.1.1-1) 6.1.1 20160430
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All warnings (new ones prefixed by >>):

>> WARNING: mm/built-in.o(.init.text+0x4c3f): Section mismatch in reference from the function zs_init() to the function .exit.text:zsmalloc_unmount()
   The function __init zs_init() references
   a function __exit zsmalloc_unmount().
   This is often seen when error handling in the init function
   uses functionality in the exit path.
   The fix is often to remove the __exit annotation of
   zsmalloc_unmount() so it may be used outside an exit section.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 24380 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute
  2016-07-06  6:23 ` [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
  2016-07-06  8:01   ` kbuild test robot
  2016-07-06  8:06   ` kbuild test robot
@ 2016-07-06  8:20   ` Ganesh Mahendran
  2 siblings, 0 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-06  8:20 UTC (permalink / raw)
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, rostedt, mingo

On Wed, Jul 06, 2016 at 02:23:51PM +0800, Ganesh Mahendran wrote:
> Add __init,__exit attribute for function that only called in
> module init/exit to save memory.
> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> ----
> v3:
>     revert change in v2 - Sergey
> v2:
>     add __init/__exit for zs_register_cpu_notifier/zs_unregister_cpu_notifier
> ---
>  mm/zsmalloc.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index ded312b..46526b9 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1344,7 +1344,7 @@ static void zs_unregister_cpu_notifier(void)
>  	cpu_notifier_register_done();
>  }
>  
> -static void init_zs_size_classes(void)
> +static void __init init_zs_size_classes(void)
>  {
>  	int nr;
>  
> @@ -1887,7 +1887,7 @@ static struct file_system_type zsmalloc_fs = {
>  	.kill_sb	= kill_anon_super,
>  };
>  
> -static int zsmalloc_mount(void)
> +static int __init zsmalloc_mount(void)
>  {
>  	int ret = 0;
>  
> @@ -1898,7 +1898,7 @@ static int zsmalloc_mount(void)
>  	return ret;
>  }
>  
> -static void zsmalloc_unmount(void)
> +static void __exit zsmalloc_unmount(void)
>  {
>  	kern_unmount(zsmalloc_mnt);
>  }

Sorry, the __exit zsmalloc_unmount() is called from the __init zs_init().

The updated patch is:

---
>From 0980a277158958210b671b79e9ddf98699dd6b50 Mon Sep 17 00:00:00 2001
From: Ganesh Mahendran <opensource.ganesh@gmail.com>
Date: Fri, 1 Jul 2016 14:05:50 +0800
Subject: [PATCH] mm/zsmalloc: add __init,__exit attribute

Add __init/__exit attributes for functions that are only called
during module init/exit, to save memory.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
----
v4:
    remove __init/__exit for zsmalloc_mount/zsmalloc_umount
v3:
    revert change in v2 - Sergey
v2:
    add __init/__exit for zs_register_cpu_notifier/zs_unregister_cpu_notifier
---
 mm/zsmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ded312b..780eabd 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1344,7 +1344,7 @@ static void zs_unregister_cpu_notifier(void)
 	cpu_notifier_register_done();
 }
 
-static void init_zs_size_classes(void)
+static void __init init_zs_size_classes(void)
 {
 	int nr;
 
-- 
1.9.1


* Re: [PATCH v3 8/8] mm/zsmalloc: add per-class compact trace event
  2016-07-06  6:23 ` [PATCH v3 8/8] mm/zsmalloc: add per-class compact trace event Ganesh Mahendran
@ 2016-07-07  7:44   ` Minchan Kim
  2016-07-07  9:08     ` Ganesh Mahendran
  0 siblings, 1 reply; 13+ messages in thread
From: Minchan Kim @ 2016-07-07  7:44 UTC (permalink / raw)
  To: Ganesh Mahendran
  Cc: linux-kernel, linux-mm, akpm, ngupta, sergey.senozhatsky.work,
	rostedt, mingo

Hello Ganesh,

On Wed, Jul 06, 2016 at 02:23:53PM +0800, Ganesh Mahendran wrote:
> Add a per-class compact trace event to report the number of scanned
> objects and freed pages.
> The trace log looks like the below:
> ----
>          kswapd0-629   [001] ....   293.161053: zs_compact_start: pool zram0
>          kswapd0-629   [001] ....   293.161056: zs_compact: class 254: 0 objects scanned, 0 pages freed
>          kswapd0-629   [001] ....   293.161057: zs_compact: class 202: 0 objects scanned, 0 pages freed
>          kswapd0-629   [001] ....   293.161062: zs_compact: class 190: 1 objects scanned, 3 pages freed
>          kswapd0-629   [001] ....   293.161063: zs_compact: class 168: 0 objects scanned, 0 pages freed
>          kswapd0-629   [001] ....   293.161065: zs_compact: class 151: 0 objects scanned, 0 pages freed
>          kswapd0-629   [001] ....   293.161073: zs_compact: class 144: 4 objects scanned, 8 pages freed
>          kswapd0-629   [001] ....   293.161087: zs_compact: class 126: 20 objects scanned, 10 pages freed
>          kswapd0-629   [001] ....   293.161095: zs_compact: class 111: 6 objects scanned, 8 pages freed
>          kswapd0-629   [001] ....   293.161122: zs_compact: class 107: 27 objects scanned, 27 pages freed
>          kswapd0-629   [001] ....   293.161157: zs_compact: class 100: 36 objects scanned, 24 pages freed
>          kswapd0-629   [001] ....   293.161173: zs_compact: class  94: 10 objects scanned, 15 pages freed
>          kswapd0-629   [001] ....   293.161221: zs_compact: class  91: 30 objects scanned, 40 pages freed
>          kswapd0-629   [001] ....   293.161256: zs_compact: class  83: 120 objects scanned, 30 pages freed
>          kswapd0-629   [001] ....   293.161266: zs_compact: class  76: 8 objects scanned, 8 pages freed
>          kswapd0-629   [001] ....   293.161282: zs_compact: class  74: 20 objects scanned, 15 pages freed
>          kswapd0-629   [001] ....   293.161306: zs_compact: class  71: 40 objects scanned, 20 pages freed
>          kswapd0-629   [001] ....   293.161313: zs_compact: class  67: 8 objects scanned, 6 pages freed
> ...
>          kswapd0-629   [001] ....   293.161454: zs_compact: class   0: 0 objects scanned, 0 pages freed
>          kswapd0-629   [001] ....   293.161455: zs_compact_end: pool zram0: 301 pages compacted
> ----
> 
> Also this patch changes trace_zsmalloc_compact_start[end] to
> trace_zs_compact_start[end] to keep function naming consistent
> with others in zsmalloc.
> 
> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
> ----
> v3:
>     add per-class compact trace event - Minchan
> 
>     I moved this patch from 1/8 to 8/8, since it depends on the patches below:
>        mm/zsmalloc: use obj_index to keep consistent with others
>        mm/zsmalloc: take obj index back from find_alloced_obj
> 

Thanks for looking into this, Ganesh!

The small change I want is to see the number of migrated objects rather
than the number of scanned objects.

If you don't mind, could you resend it with below?

Thanks.

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3a1315e54057..166232a0aed6 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1774,7 +1774,7 @@ struct zs_compact_control {
 	 * in the subpage. */
 	int obj_idx;
 
-	unsigned long nr_scanned_obj;
+	unsigned long nr_migrated_obj;
 	unsigned long nr_freed_pages;
 };
 
@@ -1809,6 +1809,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj = obj_malloc(class, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
+		cc->nr_migrated_obj++;
 		/*
 		 * record_obj updates handle's value to free_obj and it will
 		 * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which
@@ -1821,8 +1822,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		obj_free(class, used_obj);
 	}
 
-	cc->nr_scanned_obj += obj_idx - cc->obj_idx;
-
 	/* Remember last position in this iteration */
 	cc->s_page = s_page;
 	cc->obj_idx = obj_idx;
@@ -2270,7 +2269,7 @@ static unsigned long zs_can_compact(struct size_class *class)
 static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 {
 	struct zs_compact_control cc = {
-		.nr_scanned_obj = 0,
+		.nr_migrated_obj = 0,
 		.nr_freed_pages = 0,
 	};
 	struct zspage *src_zspage;
@@ -2317,7 +2316,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 	spin_unlock(&class->lock);
 
 	pool->stats.pages_compacted += cc.nr_freed_pages;
-	trace_zs_compact(class->index, cc.nr_scanned_obj, cc.nr_freed_pages);
+	trace_zs_compact(class->index, cc.nr_migrated_obj, cc.nr_freed_pages);
 }
 
 unsigned long zs_compact(struct zs_pool *pool)


* Re: [PATCH v3 8/8] mm/zsmalloc: add per-class compact trace event
  2016-07-07  7:44   ` Minchan Kim
@ 2016-07-07  9:08     ` Ganesh Mahendran
  0 siblings, 0 replies; 13+ messages in thread
From: Ganesh Mahendran @ 2016-07-07  9:08 UTC (permalink / raw)
  To: Minchan Kim
  Cc: linux-kernel, Linux-MM, Andrew Morton, Nitin Gupta,
	Sergey Senozhatsky, rostedt, mingo

2016-07-07 15:44 GMT+08:00 Minchan Kim <minchan@kernel.org>:
> Hello Ganesh,
>
> On Wed, Jul 06, 2016 at 02:23:53PM +0800, Ganesh Mahendran wrote:
>> Add a per-class compact trace event to report the number of scanned
>> objects and freed pages.
>> The trace log looks like the below:
>> ----
>>          kswapd0-629   [001] ....   293.161053: zs_compact_start: pool zram0
>>          kswapd0-629   [001] ....   293.161056: zs_compact: class 254: 0 objects scanned, 0 pages freed
>>          kswapd0-629   [001] ....   293.161057: zs_compact: class 202: 0 objects scanned, 0 pages freed
>>          kswapd0-629   [001] ....   293.161062: zs_compact: class 190: 1 objects scanned, 3 pages freed
>>          kswapd0-629   [001] ....   293.161063: zs_compact: class 168: 0 objects scanned, 0 pages freed
>>          kswapd0-629   [001] ....   293.161065: zs_compact: class 151: 0 objects scanned, 0 pages freed
>>          kswapd0-629   [001] ....   293.161073: zs_compact: class 144: 4 objects scanned, 8 pages freed
>>          kswapd0-629   [001] ....   293.161087: zs_compact: class 126: 20 objects scanned, 10 pages freed
>>          kswapd0-629   [001] ....   293.161095: zs_compact: class 111: 6 objects scanned, 8 pages freed
>>          kswapd0-629   [001] ....   293.161122: zs_compact: class 107: 27 objects scanned, 27 pages freed
>>          kswapd0-629   [001] ....   293.161157: zs_compact: class 100: 36 objects scanned, 24 pages freed
>>          kswapd0-629   [001] ....   293.161173: zs_compact: class  94: 10 objects scanned, 15 pages freed
>>          kswapd0-629   [001] ....   293.161221: zs_compact: class  91: 30 objects scanned, 40 pages freed
>>          kswapd0-629   [001] ....   293.161256: zs_compact: class  83: 120 objects scanned, 30 pages freed
>>          kswapd0-629   [001] ....   293.161266: zs_compact: class  76: 8 objects scanned, 8 pages freed
>>          kswapd0-629   [001] ....   293.161282: zs_compact: class  74: 20 objects scanned, 15 pages freed
>>          kswapd0-629   [001] ....   293.161306: zs_compact: class  71: 40 objects scanned, 20 pages freed
>>          kswapd0-629   [001] ....   293.161313: zs_compact: class  67: 8 objects scanned, 6 pages freed
>> ...
>>          kswapd0-629   [001] ....   293.161454: zs_compact: class   0: 0 objects scanned, 0 pages freed
>>          kswapd0-629   [001] ....   293.161455: zs_compact_end: pool zram0: 301 pages compacted
>> ----
>>
>> Also this patch changes trace_zsmalloc_compact_start[end] to
>> trace_zs_compact_start[end] to keep function naming consistent
>> with others in zsmalloc.
>>
>> Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
>> ----
>> v3:
>>     add per-class compact trace event - Minchan
>>
>>     I moved this patch from 1/8 to 8/8, since it depends on the patches below:
>>        mm/zsmalloc: use obj_index to keep consistent with others
>>        mm/zsmalloc: take obj index back from find_alloced_obj
>>
>
> Thanks for looking into this, Ganesh!
>
> The small change I want is to see the number of migrated objects rather
> than the number of scanned objects.
>
> If you don't mind, could you resend it with below?

I will resend a patch.

Thanks.

>
> Thanks.
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 3a1315e54057..166232a0aed6 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1774,7 +1774,7 @@ struct zs_compact_control {
>          * in the subpage. */
>         int obj_idx;
>
> -       unsigned long nr_scanned_obj;
> +       unsigned long nr_migrated_obj;
>         unsigned long nr_freed_pages;
>  };
>
> @@ -1809,6 +1809,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>                 free_obj = obj_malloc(class, get_zspage(d_page), handle);
>                 zs_object_copy(class, free_obj, used_obj);
>                 obj_idx++;
> +               cc->nr_migrated_obj++;
>                 /*
>                  * record_obj updates handle's value to free_obj and it will
>                  * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which
> @@ -1821,8 +1822,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>                 obj_free(class, used_obj);
>         }
>
> -       cc->nr_scanned_obj += obj_idx - cc->obj_idx;
> -
>         /* Remember last position in this iteration */
>         cc->s_page = s_page;
>         cc->obj_idx = obj_idx;
> @@ -2270,7 +2269,7 @@ static unsigned long zs_can_compact(struct size_class *class)
>  static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  {
>         struct zs_compact_control cc = {
> -               .nr_scanned_obj = 0,
> +               .nr_migrated_obj = 0,
>                 .nr_freed_pages = 0,
>         };
>         struct zspage *src_zspage;
> @@ -2317,7 +2316,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>         spin_unlock(&class->lock);
>
>         pool->stats.pages_compacted += cc.nr_freed_pages;
> -       trace_zs_compact(class->index, cc.nr_scanned_obj, cc.nr_freed_pages);
> +       trace_zs_compact(class->index, cc.nr_migrated_obj, cc.nr_freed_pages);
>  }
>
>  unsigned long zs_compact(struct zs_pool *pool)
>


end of thread, other threads:[~2016-07-07  9:09 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-06  6:23 [PATCH v3 1/8] mm/zsmalloc: use obj_index to keep consistent with others Ganesh Mahendran
2016-07-06  6:23 ` [PATCH v3 2/8] mm/zsmalloc: take obj index back from find_alloced_obj Ganesh Mahendran
2016-07-06  6:23 ` [PATCH v3 3/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects Ganesh Mahendran
2016-07-06  6:23 ` [PATCH v3 4/8] mm/zsmalloc: avoid calculate max objects of zspage twice Ganesh Mahendran
2016-07-06  6:23 ` [PATCH v3 5/8] mm/zsmalloc: keep comments consistent with code Ganesh Mahendran
2016-07-06  6:23 ` [PATCH v3 6/8] mm/zsmalloc: add __init,__exit attribute Ganesh Mahendran
2016-07-06  8:01   ` kbuild test robot
2016-07-06  8:06   ` kbuild test robot
2016-07-06  8:20   ` Ganesh Mahendran
2016-07-06  6:23 ` [PATCH v3 7/8] mm/zsmalloc: use helper to clear page->flags bit Ganesh Mahendran
2016-07-06  6:23 ` [PATCH v3 8/8] mm/zsmalloc: add per-class compact trace event Ganesh Mahendran
2016-07-07  7:44   ` Minchan Kim
2016-07-07  9:08     ` Ganesh Mahendran
