* [PATCH v4 1/8] mm/zsmalloc: use obj_index to keep consistent with others
From: Ganesh Mahendran @ 2016-07-07  9:05 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, mingo, rostedt,
	Ganesh Mahendran

This is a cleanup patch. Rename "index" to "obj_idx" to keep the
naming consistent with the rest of zsmalloc.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
v4: none
v3: none
v2: none
---
 mm/zsmalloc.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e425de4..3a37977 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1779,7 +1779,7 @@ struct zs_compact_control {
 	struct page *d_page;
 	 /* Starting object index within @s_page which used for live object
 	  * in the subpage. */
-	int index;
+	int obj_idx;
 };
 
 static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
@@ -1789,16 +1789,16 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 	unsigned long handle;
 	struct page *s_page = cc->s_page;
 	struct page *d_page = cc->d_page;
-	unsigned long index = cc->index;
+	int obj_idx = cc->obj_idx;
 	int ret = 0;
 
 	while (1) {
-		handle = find_alloced_obj(class, s_page, index);
+		handle = find_alloced_obj(class, s_page, obj_idx);
 		if (!handle) {
 			s_page = get_next_page(s_page);
 			if (!s_page)
 				break;
-			index = 0;
+			obj_idx = 0;
 			continue;
 		}
 
@@ -1812,7 +1812,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		used_obj = handle_to_obj(handle);
 		free_obj = obj_malloc(class, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
-		index++;
+		obj_idx++;
 		/*
 		 * record_obj updates handle's value to free_obj and it will
 		 * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which
@@ -1827,7 +1827,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 
 	/* Remember last position in this iteration */
 	cc->s_page = s_page;
-	cc->index = index;
+	cc->obj_idx = obj_idx;
 
 	return ret;
 }
@@ -2282,7 +2282,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		if (!zs_can_compact(class))
 			break;
 
-		cc.index = 0;
+		cc.obj_idx = 0;
 		cc.s_page = get_first_page(src_zspage);
 
 		while ((dst_zspage = isolate_zspage(class, false))) {
-- 
1.9.1

* [PATCH v4 2/8] mm/zsmalloc: take obj index back from find_alloced_obj
From: Ganesh Mahendran @ 2016-07-07  9:05 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, mingo, rostedt,
	Ganesh Mahendran

The object index value should be updated after returning from
find_alloced_obj(), so that the next scan resumes where the previous
call stopped instead of burning CPU on unnecessary object rescanning.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
v4: none
v3: none
v2:
  - update commit description
---
 mm/zsmalloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3a37977..1f144f1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1744,10 +1744,11 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
  * return handle.
  */
 static unsigned long find_alloced_obj(struct size_class *class,
-					struct page *page, int index)
+					struct page *page, int *obj_idx)
 {
 	unsigned long head;
 	int offset = 0;
+	int index = *obj_idx;
 	unsigned long handle = 0;
 	void *addr = kmap_atomic(page);
 
@@ -1768,6 +1769,9 @@ static unsigned long find_alloced_obj(struct size_class *class,
 	}
 
 	kunmap_atomic(addr);
+
+	*obj_idx = index;
+
 	return handle;
 }
 
@@ -1793,7 +1797,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 	int ret = 0;
 
 	while (1) {
-		handle = find_alloced_obj(class, s_page, obj_idx);
+		handle = find_alloced_obj(class, s_page, &obj_idx);
 		if (!handle) {
 			s_page = get_next_page(s_page);
 			if (!s_page)
-- 
1.9.1

* [PATCH v4 3/8] mm/zsmalloc: use class->objs_per_zspage to get num of max objects
From: Ganesh Mahendran @ 2016-07-07  9:05 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, mingo, rostedt,
	Ganesh Mahendran

The maximum number of objects in a zspage is now stored in each
size_class, so there is no need to recalculate it.
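
For reference, the cached value follows the same formula the kernel
uses, pages_per_zspage * PAGE_SIZE / size; a userspace sketch (the
class sizes below are made up for illustration):

    #include <stdio.h>

    #define PAGE_SIZE 4096

    /* objs_per_zspage as cached in size_class: how many objects of a
     * given class size fit into one zspage of pages_per_zspage pages. */
    static int objs_per_zspage(int size, int pages_per_zspage)
    {
        return pages_per_zspage * PAGE_SIZE / size;
    }

    int main(void)
    {
        printf("%d\n", objs_per_zspage(32, 1));    /* 128 */
        printf("%d\n", objs_per_zspage(3264, 4));  /* 5 */
        return 0;
    }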

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
---
 mm/zsmalloc.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1f144f1..82ff2c0 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -638,8 +638,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		freeable = zs_can_compact(class);
 		spin_unlock(&class->lock);
 
-		objs_per_zspage = get_maxobj_per_zspage(class->size,
-				class->pages_per_zspage);
+		objs_per_zspage = class->objs_per_zspage;
 		pages_used = obj_allocated / objs_per_zspage *
 				class->pages_per_zspage;
 
@@ -1017,8 +1016,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	cache_free_zspage(pool, zspage);
 
-	zs_stat_dec(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-			class->size, class->pages_per_zspage));
+	zs_stat_dec(class, OBJ_ALLOCATED, class->objs_per_zspage);
 	atomic_long_sub(class->pages_per_zspage,
 					&pool->pages_allocated);
 }
@@ -1369,7 +1367,7 @@ static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
 	if (prev->pages_per_zspage != pages_per_zspage)
 		return false;
 
-	if (get_maxobj_per_zspage(prev->size, prev->pages_per_zspage)
+	if (prev->objs_per_zspage
 		!= get_maxobj_per_zspage(size, pages_per_zspage))
 		return false;
 
@@ -1595,8 +1593,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
 	record_obj(handle, obj);
 	atomic_long_add(class->pages_per_zspage,
 				&pool->pages_allocated);
-	zs_stat_inc(class, OBJ_ALLOCATED, get_maxobj_per_zspage(
-			class->size, class->pages_per_zspage));
+	zs_stat_inc(class, OBJ_ALLOCATED, class->objs_per_zspage);
 
 	/* We completely set up zspage so mark them as movable */
 	SetZsPageMovable(pool, zspage);
@@ -2268,8 +2265,7 @@ static unsigned long zs_can_compact(struct size_class *class)
 		return 0;
 
 	obj_wasted = obj_allocated - obj_used;
-	obj_wasted /= get_maxobj_per_zspage(class->size,
-			class->pages_per_zspage);
+	obj_wasted /= class->objs_per_zspage;
 
 	return obj_wasted * class->pages_per_zspage;
 }
@@ -2483,8 +2479,8 @@ struct zs_pool *zs_create_pool(const char *name)
 		class->size = size;
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
-		class->objs_per_zspage = class->pages_per_zspage *
-						PAGE_SIZE / class->size;
+		class->objs_per_zspage = get_maxobj_per_zspage(class->size,
+							class->pages_per_zspage);
 		spin_lock_init(&class->lock);
 		pool->size_class[i] = class;
 		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
-- 
1.9.1

* [PATCH v4 4/8] mm/zsmalloc: avoid calculate max objects of zspage twice
From: Ganesh Mahendran @ 2016-07-07  9:05 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, mingo, rostedt,
	Ganesh Mahendran

Currently, if a class cannot be merged, the maximum number of objects
per zspage in that class may be calculated twice.

This patch calculates the maximum objects per zspage once at the
beginning and passes the value to can_merge() to decide whether the
class can be merged.

This patch also removes get_maxobj_per_zspage(), as it no longer has
any callers.
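
For illustration, a minimal userspace model of the reworked merge
check (simplified; the struct and the values are invented for the
example):

    #include <stdbool.h>
    #include <stdio.h>

    struct size_class {
        int pages_per_zspage;
        int objs_per_zspage;
    };

    /* With objs_per_zspage computed once by the caller, can_merge()
     * no longer needs the class size to redo the calculation. */
    static bool can_merge(const struct size_class *prev,
                          int pages_per_zspage, int objs_per_zspage)
    {
        return prev->pages_per_zspage == pages_per_zspage &&
               prev->objs_per_zspage == objs_per_zspage;
    }

    int main(void)
    {
        struct size_class prev = { .pages_per_zspage = 1,
                                   .objs_per_zspage = 128 };

        printf("%d\n", can_merge(&prev, 1, 128));  /* 1: merge */
        printf("%d\n", can_merge(&prev, 1, 64));   /* 0: keep separate */
        return 0;
    }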

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
v4: none
v3: none
v2:
    remove get_maxobj_per_zspage()  - Minchan
---
 mm/zsmalloc.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 82ff2c0..82b9977 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -470,11 +470,6 @@ static struct zpool_driver zs_zpool_driver = {
 MODULE_ALIAS("zpool-zsmalloc");
 #endif /* CONFIG_ZPOOL */
 
-static unsigned int get_maxobj_per_zspage(int size, int pages_per_zspage)
-{
-	return pages_per_zspage * PAGE_SIZE / size;
-}
-
 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
 static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
 
@@ -1362,16 +1357,14 @@ static void init_zs_size_classes(void)
 	zs_size_classes = nr;
 }
 
-static bool can_merge(struct size_class *prev, int size, int pages_per_zspage)
+static bool can_merge(struct size_class *prev, int pages_per_zspage,
+					int objs_per_zspage)
 {
-	if (prev->pages_per_zspage != pages_per_zspage)
-		return false;
+	if (prev->pages_per_zspage == pages_per_zspage &&
+		prev->objs_per_zspage == objs_per_zspage)
+		return true;
 
-	if (prev->objs_per_zspage
-		!= get_maxobj_per_zspage(size, pages_per_zspage))
-		return false;
-
-	return true;
+	return false;
 }
 
 static bool zspage_full(struct size_class *class, struct zspage *zspage)
@@ -2448,6 +2441,7 @@ struct zs_pool *zs_create_pool(const char *name)
 	for (i = zs_size_classes - 1; i >= 0; i--) {
 		int size;
 		int pages_per_zspage;
+		int objs_per_zspage;
 		struct size_class *class;
 		int fullness = 0;
 
@@ -2455,6 +2449,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		if (size > ZS_MAX_ALLOC_SIZE)
 			size = ZS_MAX_ALLOC_SIZE;
 		pages_per_zspage = get_pages_per_zspage(size);
+		objs_per_zspage = pages_per_zspage * PAGE_SIZE / size;
 
 		/*
 		 * size_class is used for normal zsmalloc operation such
@@ -2466,7 +2461,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		 * previous size_class if possible.
 		 */
 		if (prev_class) {
-			if (can_merge(prev_class, size, pages_per_zspage)) {
+			if (can_merge(prev_class, pages_per_zspage, objs_per_zspage)) {
 				pool->size_class[i] = prev_class;
 				continue;
 			}
@@ -2479,8 +2474,7 @@ struct zs_pool *zs_create_pool(const char *name)
 		class->size = size;
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
-		class->objs_per_zspage = get_maxobj_per_zspage(class->size,
-							class->pages_per_zspage);
+		class->objs_per_zspage = objs_per_zspage;
 		spin_lock_init(&class->lock);
 		pool->size_class[i] = class;
 		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
-- 
1.9.1

* [PATCH v4 5/8] mm/zsmalloc: keep comments consistent with code
From: Ganesh Mahendran @ 2016-07-07  9:05 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, mingo, rostedt,
	Ganesh Mahendran

Some minor comment changes:
1) update the zs_malloc() and zs_create_pool() function headers
2) update "Usage of struct page fields"

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
v4: none
v3: none
v2:
    change *object index* to *object offset* - Minchan
---
 mm/zsmalloc.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 82b9977..ded312b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -20,6 +20,7 @@
  *	page->freelist(index): links together all component pages of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
+ *	page->units: first object offset in a subpage of zspage
  *
  * Usage of struct page flags:
  *	PG_private: identifies the first component page
@@ -140,9 +141,6 @@
  */
 #define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> CLASS_BITS)
 
-/*
- * We do not maintain any list for completely empty or full pages
- */
 enum fullness_group {
 	ZS_EMPTY,
 	ZS_ALMOST_EMPTY,
@@ -1535,6 +1533,7 @@ static unsigned long obj_malloc(struct size_class *class,
  * zs_malloc - Allocate block of given size from pool.
  * @pool: pool to allocate from
  * @size: size of block to allocate
+ * @gfp: gfp flags when allocating object
  *
  * On success, handle to the allocated object is returned,
  * otherwise 0.
@@ -2401,7 +2400,7 @@ static int zs_register_shrinker(struct zs_pool *pool)
 
 /**
  * zs_create_pool - Creates an allocation pool to work from.
- * @flags: allocation flags used to allocate pool metadata
+ * @name: pool name to be created
  *
  * This function must be called before anything when using
  * the zsmalloc allocator.
-- 
1.9.1

* [PATCH v4 6/8] mm/zsmalloc: add __init,__exit attribute
From: Ganesh Mahendran @ 2016-07-07  9:05 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, mingo, rostedt,
	Ganesh Mahendran

Add the __init/__exit attributes to functions that are only called
at module init/exit time; code marked __init is discarded after
initialization, which saves memory.
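
For context, a minimal module sketch of how these attributes are
typically used (a generic example, not zsmalloc code):

    #include <linux/init.h>
    #include <linux/module.h>

    /* Runs once at load time; the section holding __init code is
     * freed after initialization completes. */
    static int __init demo_init(void)
    {
        return 0;
    }

    /* Only needed on unload; __exit code is dropped entirely when
     * the code is built into the kernel rather than as a module. */
    static void __exit demo_exit(void)
    {
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");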

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
v4:
    remove __init/__exit from zsmalloc_mount/zsmalloc_umount
v3:
    revert change in v2 - Sergey
v2:
    add __init/__exit for zs_register_cpu_notifier/zs_unregister_cpu_notifier
---
 mm/zsmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ded312b..780eabd 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1344,7 +1344,7 @@ static void zs_unregister_cpu_notifier(void)
 	cpu_notifier_register_done();
 }
 
-static void init_zs_size_classes(void)
+static void __init init_zs_size_classes(void)
 {
 	int nr;
 
-- 
1.9.1

* [PATCH v4 7/8] mm/zsmalloc: use helper to clear page->flags bit
From: Ganesh Mahendran @ 2016-07-07  9:05 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, mingo, rostedt,
	Ganesh Mahendran

Use the ClearPagePrivate/ClearPagePrivate2 helpers to clear
PG_private/PG_private_2 in page->flags.
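
For background, these helpers are generated by the page-flags macro
family in include/linux/page-flags.h; conceptually, each expands to
roughly the open-coded form being replaced (a simplified sketch that
ignores the compound-page handling):

    /* approximate expansion of ClearPagePrivate() */
    static inline void ClearPagePrivate(struct page *page)
    {
        clear_bit(PG_private, &page->flags);
    }

so the change is behaviorally a no-op while using the canonical
helpers.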

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
---
v4: none
v3: none
v2: none
---
 mm/zsmalloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 780eabd..163bc90 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -940,8 +940,8 @@ static void unpin_tag(unsigned long handle)
 static void reset_page(struct page *page)
 {
 	__ClearPageMovable(page);
-	clear_bit(PG_private, &page->flags);
-	clear_bit(PG_private_2, &page->flags);
+	ClearPagePrivate(page);
+	ClearPagePrivate2(page);
 	set_page_private(page, 0);
 	page_mapcount_reset(page);
 	ClearPageHugeObject(page);
-- 
1.9.1

* [PATCH v4 8/8] mm/zsmalloc: add per-class compact trace event
From: Ganesh Mahendran @ 2016-07-07  9:05 UTC
  To: linux-kernel, linux-mm
  Cc: akpm, minchan, ngupta, sergey.senozhatsky.work, mingo, rostedt,
	Ganesh Mahendran

Add a per-class compact trace event that reports the number of
migrated objects and the number of freed pages.

The trace log looks like this:
----
            bash-3863  [001] ....   141.791366: zs_compact_start: pool zram0
            bash-3863  [001] ....   141.791372: zs_compact: class 254: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791375: zs_compact: class 202: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791385: zs_compact: class 190: 1 objects migrated, 3 pages freed
            bash-3863  [001] ....   141.791393: zs_compact: class 168: 2 objects migrated, 2 pages freed
            bash-3863  [001] ....   141.791396: zs_compact: class 151: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791407: zs_compact: class 144: 5 objects migrated, 4 pages freed
            bash-3863  [001] ....   141.791427: zs_compact: class 126: 8 objects migrated, 8 pages freed
            bash-3863  [001] ....   141.791433: zs_compact: class 111: 1 objects migrated, 4 pages freed
            bash-3863  [001] ....   141.791459: zs_compact: class 107: 18 objects migrated, 12 pages freed
            bash-3863  [001] ....   141.791487: zs_compact: class 100: 18 objects migrated, 16 pages freed
            bash-3863  [001] ....   141.791509: zs_compact: class  94: 18 objects migrated, 9 pages freed
            bash-3863  [001] ....   141.791560: zs_compact: class  91: 44 objects migrated, 24 pages freed
            bash-3863  [001] ....   141.791605: zs_compact: class  83: 35 objects migrated, 20 pages freed
            bash-3863  [001] ....   141.791616: zs_compact: class  76: 8 objects migrated, 4 pages freed
            bash-3863  [001] ....   141.791644: zs_compact: class  74: 21 objects migrated, 9 pages freed
            bash-3863  [001] ....   141.791665: zs_compact: class  71: 18 objects migrated, 10 pages freed
            bash-3863  [001] ....   141.791736: zs_compact: class  67: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791763: zs_compact: class  66: 22 objects migrated, 8 pages freed
            bash-3863  [001] ....   141.791820: zs_compact: class  62: 18 objects migrated, 6 pages freed
            bash-3863  [001] ....   141.791826: zs_compact: class  58: 1 objects migrated, 4 pages freed
            bash-3863  [001] ....   141.791829: zs_compact: class  57: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791834: zs_compact: class  54: 2 objects migrated, 2 pages freed
...
            bash-3863  [001] ....   141.791952: zs_compact: class   4: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791964: zs_compact: class   3: 14 objects migrated, 1 pages freed
            bash-3863  [001] ....   141.791966: zs_compact: class   2: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791968: zs_compact: class   1: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791971: zs_compact: class   0: 0 objects migrated, 0 pages freed
            bash-3863  [001] ....   141.791973: zs_compact_end: pool zram0: 155 pages compacted
----

This patch also renames trace_zsmalloc_compact_start/end() to
trace_zs_compact_start/end() to keep the naming consistent with other
functions in zsmalloc.

Signed-off-by: Ganesh Mahendran <opensource.ganesh@gmail.com>
---
v4:
    show the number of migrated objects rather than the number of scanned objects
v3:
    add per-class compact trace event - Minchan

    I moved this patch from 1/8 to 8/8, since it depends on the patches below:
       mm/zsmalloc: use obj_index to keep consistent with others
       mm/zsmalloc: take obj index back from find_alloced_obj

v2:
    update commit description
---
 include/trace/events/zsmalloc.h | 40 ++++++++++++++++++++++++++++++----------
 mm/zsmalloc.c                   | 25 +++++++++++++++++--------
 2 files changed, 47 insertions(+), 18 deletions(-)

diff --git a/include/trace/events/zsmalloc.h b/include/trace/events/zsmalloc.h
index 3b6f14e..772cf65 100644
--- a/include/trace/events/zsmalloc.h
+++ b/include/trace/events/zsmalloc.h
@@ -7,7 +7,7 @@
 #include <linux/types.h>
 #include <linux/tracepoint.h>
 
-TRACE_EVENT(zsmalloc_compact_start,
+TRACE_EVENT(zs_compact_start,
 
 	TP_PROTO(const char *pool_name),
 
@@ -25,29 +25,49 @@ TRACE_EVENT(zsmalloc_compact_start,
 		  __entry->pool_name)
 );
 
-TRACE_EVENT(zsmalloc_compact_end,
+TRACE_EVENT(zs_compact_end,
 
-	TP_PROTO(const char *pool_name, unsigned long pages_compacted,
-			unsigned long pages_total_compacted),
+	TP_PROTO(const char *pool_name, unsigned long pages_compacted),
 
-	TP_ARGS(pool_name, pages_compacted, pages_total_compacted),
+	TP_ARGS(pool_name, pages_compacted),
 
 	TP_STRUCT__entry(
 		__field(const char *, pool_name)
 		__field(unsigned long, pages_compacted)
-		__field(unsigned long, pages_total_compacted)
 	),
 
 	TP_fast_assign(
 		__entry->pool_name = pool_name;
 		__entry->pages_compacted = pages_compacted;
-		__entry->pages_total_compacted = pages_total_compacted;
 	),
 
-	TP_printk("pool %s: %ld pages compacted(total %ld)",
+	TP_printk("pool %s: %ld pages compacted",
 		  __entry->pool_name,
-		  __entry->pages_compacted,
-		  __entry->pages_total_compacted)
+		  __entry->pages_compacted)
+);
+
+TRACE_EVENT(zs_compact,
+
+	TP_PROTO(int class, unsigned long nr_migrated_obj, unsigned long nr_freed_pages),
+
+	TP_ARGS(class, nr_migrated_obj, nr_freed_pages),
+
+	TP_STRUCT__entry(
+		__field(int, class)
+		__field(unsigned long, nr_migrated_obj)
+		__field(unsigned long, nr_freed_pages)
+	),
+
+	TP_fast_assign(
+		__entry->class = class;
+		__entry->nr_migrated_obj = nr_migrated_obj;
+		__entry->nr_freed_pages = nr_freed_pages;
+	),
+
+	TP_printk("class %3d: %ld objects migrated, %ld pages freed",
+		  __entry->class,
+		  __entry->nr_migrated_obj,
+		  __entry->nr_freed_pages)
 );
 
 #endif /* _TRACE_ZSMALLOC_H */
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 163bc90..5e5237c 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1770,9 +1770,12 @@ struct zs_compact_control {
 	/* Destination page for migration which should be a first page
 	 * of zspage. */
 	struct page *d_page;
-	 /* Starting object index within @s_page which used for live object
-	  * in the subpage. */
+	/* Starting object index within @s_page which used for live object
+	 * in the subpage. */
 	int obj_idx;
+
+	unsigned long nr_migrated_obj;
+	unsigned long nr_freed_pages;
 };
 
 static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
@@ -1806,6 +1809,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		free_obj = obj_malloc(class, get_zspage(d_page), handle);
 		zs_object_copy(class, free_obj, used_obj);
 		obj_idx++;
+		cc->nr_migrated_obj++;
 		/*
 		 * record_obj updates handle's value to free_obj and it will
 		 * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which
@@ -2264,7 +2268,10 @@ static unsigned long zs_can_compact(struct size_class *class)
 
 static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 {
-	struct zs_compact_control cc;
+	struct zs_compact_control cc = {
+		.nr_migrated_obj = 0,
+		.nr_freed_pages = 0,
+	};
 	struct zspage *src_zspage;
 	struct zspage *dst_zspage = NULL;
 
@@ -2296,7 +2303,7 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(class, dst_zspage);
 		if (putback_zspage(class, src_zspage) == ZS_EMPTY) {
 			free_zspage(pool, class, src_zspage);
-			pool->stats.pages_compacted += class->pages_per_zspage;
+			cc.nr_freed_pages += class->pages_per_zspage;
 		}
 		spin_unlock(&class->lock);
 		cond_resched();
@@ -2307,6 +2314,9 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 		putback_zspage(class, src_zspage);
 
 	spin_unlock(&class->lock);
+
+	pool->stats.pages_compacted += cc.nr_freed_pages;
+	trace_zs_compact(class->index, cc.nr_migrated_obj, cc.nr_freed_pages);
 }
 
 unsigned long zs_compact(struct zs_pool *pool)
@@ -2315,7 +2325,7 @@ unsigned long zs_compact(struct zs_pool *pool)
 	struct size_class *class;
 	unsigned long pages_compacted_before = pool->stats.pages_compacted;
 
-	trace_zsmalloc_compact_start(pool->name);
+	trace_zs_compact_start(pool->name);
 
 	for (i = zs_size_classes - 1; i >= 0; i--) {
 		class = pool->size_class[i];
@@ -2326,9 +2336,8 @@ unsigned long zs_compact(struct zs_pool *pool)
 		__zs_compact(pool, class);
 	}
 
-	trace_zsmalloc_compact_end(pool->name,
-		pool->stats.pages_compacted - pages_compacted_before,
-		pool->stats.pages_compacted);
+	trace_zs_compact_end(pool->name,
+		pool->stats.pages_compacted - pages_compacted_before);
 
 	return pool->stats.pages_compacted;
 }
-- 
1.9.1
