* [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction
@ 2015-07-07 11:56 Sergey Senozhatsky
  2015-07-07 11:56 ` [PATCH v6 1/7] zsmalloc: drop unused variable `nr_to_migrate' Sergey Senozhatsky
                   ` (6 more replies)
  0 siblings, 7 replies; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 11:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

Hello,

This patch set tweaks compaction and makes it possible to trigger
pool compaction automatically when the system is getting low on
memory.

zsmalloc in some cases can suffer from notable fragmentation, and
compaction can release a considerable amount of memory. The problem
here is that we currently fully rely on user space to perform
compaction when needed. However, performing zsmalloc compaction is
not always an obvious thing to do. For example, suppose we have an
`idle' fragmented (compaction was never performed) zram device and
the system is getting low on memory due to some 3rd party user
processes (gcc LTO, or firefox, etc.). It's quite unlikely that user
space will issue zpool compaction in this case. Besides, user space
cannot tell for sure how badly the pool is fragmented; however, this
info is known to zsmalloc and, hence, to a shrinker.

v6:
-- use new zs_pool stats API (Minchan)

v5:
-- account freed pages correctly

v4: address review notes (Minchan)
-- do not abort __zs_compact() quickly (Minchan)
-- switch zsmalloc compaction to operate in terms of freed pages
-- micro-optimize zs_can_compact() (Minchan)

v3:
-- drop almost_empty waterline adjustment patch (Minchan)
-- do not hold class->lock for the entire compaction period (Minchan)

v2:
-- use a slab shrinker instead of triggering compaction from zs_free (Minchan)

Sergey Senozhatsky (7):
  zsmalloc: drop unused variable `nr_to_migrate'
  zsmalloc: always keep per-class stats
  zsmalloc: introduce zs_can_compact() function
  zsmalloc: cosmetic compaction code adjustments
  zsmalloc/zram: introduce zs_pool_stats api
  zsmalloc: account the number of compacted pages
  zsmalloc: use shrinker to trigger auto-compaction

 Documentation/blockdev/zram.txt |   3 +-
 drivers/block/zram/zram_drv.c   |  11 +-
 drivers/block/zram/zram_drv.h   |   1 -
 include/linux/zsmalloc.h        |   6 ++
 mm/zsmalloc.c                   | 220 +++++++++++++++++++++++++++-------------
 5 files changed, 164 insertions(+), 77 deletions(-)

-- 
2.4.5



* [PATCH v6 1/7] zsmalloc: drop unused variable `nr_to_migrate'
  2015-07-07 11:56 [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction Sergey Senozhatsky
@ 2015-07-07 11:56 ` Sergey Senozhatsky
  2015-07-07 11:56 ` [PATCH v6 2/7] zsmalloc: always keep per-class stats Sergey Senozhatsky
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 11:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

__zs_compact() does not use `nr_to_migrate', drop it.

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 3538b8c..2aecdb3 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1712,7 +1712,6 @@ static struct page *isolate_source_page(struct size_class *class)
 static unsigned long __zs_compact(struct zs_pool *pool,
 				struct size_class *class)
 {
-	int nr_to_migrate;
 	struct zs_compact_control cc;
 	struct page *src_page;
 	struct page *dst_page = NULL;
@@ -1723,8 +1722,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 
 		BUG_ON(!is_first_page(src_page));
 
-		/* The goal is to migrate all live objects in source page */
-		nr_to_migrate = src_page->inuse;
 		cc.index = 0;
 		cc.s_page = src_page;
 
@@ -1739,7 +1736,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 
 			putback_zspage(pool, class, dst_page);
 			nr_total_migrated += cc.nr_migrated;
-			nr_to_migrate -= cc.nr_migrated;
 		}
 
 		/* Stop if we couldn't find slot */
-- 
2.4.5



* [PATCH v6 2/7] zsmalloc: always keep per-class stats
  2015-07-07 11:56 [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction Sergey Senozhatsky
  2015-07-07 11:56 ` [PATCH v6 1/7] zsmalloc: drop unused variable `nr_to_migrate' Sergey Senozhatsky
@ 2015-07-07 11:56 ` Sergey Senozhatsky
  2015-07-07 11:56 ` [PATCH v6 3/7] zsmalloc: introduce zs_can_compact() function Sergey Senozhatsky
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 11:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

Always account per-class `zs_size_stat' stats. This data will
help us make better decisions during compaction. We are especially
interested in OBJ_ALLOCATED and OBJ_USED, which can tell us if
class compaction will result in any memory gain.

For instance, we know the number of allocated objects in the class,
the number of objects being used (so we also know how many objects
are not used) and the number of objects per zspage. So we can check
whether we have enough unused objects to form at least one ZS_EMPTY
zspage during compaction.

We calculate this value on a per-class basis, so we can calculate
the total number of zspages that can be released -- which is exactly
what a shrinker wants to know.
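
For illustration (a sketch, not part of this patch -- the actual
check is introduced in the next patch as zs_can_compact()), with
always-on stats the gain estimate becomes simple arithmetic:

	unsigned long allocated = zs_stat_get(class, OBJ_ALLOCATED);
	unsigned long used = zs_stat_get(class, OBJ_USED);
	/* objects that occupy zspage slots but hold no data */
	unsigned long unused = allocated - used;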

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 48 ++++++++++++------------------------------------
 1 file changed, 12 insertions(+), 36 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2aecdb3..036baa8 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -169,14 +169,12 @@ enum zs_stat_type {
 	NR_ZS_STAT_TYPE,
 };
 
-#ifdef CONFIG_ZSMALLOC_STAT
-
-static struct dentry *zs_stat_root;
-
 struct zs_size_stat {
 	unsigned long objs[NR_ZS_STAT_TYPE];
 };
 
+#ifdef CONFIG_ZSMALLOC_STAT
+static struct dentry *zs_stat_root;
 #endif
 
 /*
@@ -201,25 +199,21 @@ static int zs_size_classes;
 static const int fullness_threshold_frac = 4;
 
 struct size_class {
+	spinlock_t		lock;
+	struct page		*fullness_list[_ZS_NR_FULLNESS_GROUPS];
 	/*
 	 * Size of objects stored in this class. Must be multiple
 	 * of ZS_ALIGN.
 	 */
-	int size;
-	unsigned int index;
+	int			size;
+	unsigned int		index;
 
 	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
-	int pages_per_zspage;
-	/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
-	bool huge;
-
-#ifdef CONFIG_ZSMALLOC_STAT
-	struct zs_size_stat stats;
-#endif
-
-	spinlock_t lock;
+	int			pages_per_zspage;
+	struct zs_size_stat	stats;
 
-	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
+	/* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
+	bool			huge;
 };
 
 /*
@@ -441,8 +435,6 @@ static int get_size_class_index(int size)
 	return min(zs_size_classes - 1, idx);
 }
 
-#ifdef CONFIG_ZSMALLOC_STAT
-
 static inline void zs_stat_inc(struct size_class *class,
 				enum zs_stat_type type, unsigned long cnt)
 {
@@ -461,6 +453,8 @@ static inline unsigned long zs_stat_get(struct size_class *class,
 	return class->stats.objs[type];
 }
 
+#ifdef CONFIG_ZSMALLOC_STAT
+
 static int __init zs_stat_init(void)
 {
 	if (!debugfs_initialized())
@@ -576,23 +570,6 @@ static void zs_pool_stat_destroy(struct zs_pool *pool)
 }
 
 #else /* CONFIG_ZSMALLOC_STAT */
-
-static inline void zs_stat_inc(struct size_class *class,
-				enum zs_stat_type type, unsigned long cnt)
-{
-}
-
-static inline void zs_stat_dec(struct size_class *class,
-				enum zs_stat_type type, unsigned long cnt)
-{
-}
-
-static inline unsigned long zs_stat_get(struct size_class *class,
-				enum zs_stat_type type)
-{
-	return 0;
-}
-
 static int __init zs_stat_init(void)
 {
 	return 0;
@@ -610,7 +587,6 @@ static inline int zs_pool_stat_create(char *name, struct zs_pool *pool)
 static inline void zs_pool_stat_destroy(struct zs_pool *pool)
 {
 }
-
 #endif
 
 
-- 
2.4.5



* [PATCH v6 3/7] zsmalloc: introduce zs_can_compact() function
  2015-07-07 11:56 [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction Sergey Senozhatsky
  2015-07-07 11:56 ` [PATCH v6 1/7] zsmalloc: drop unused variable `nr_to_migrate' Sergey Senozhatsky
  2015-07-07 11:56 ` [PATCH v6 2/7] zsmalloc: always keep per-class stats Sergey Senozhatsky
@ 2015-07-07 11:56 ` Sergey Senozhatsky
  2015-07-07 13:21   ` Minchan Kim
  2015-07-07 11:56 ` [PATCH v6 4/7] zsmalloc: cosmetic compaction code adjustments Sergey Senozhatsky
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 11:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

This function checks if class compaction will free any pages.
In other words, do we have enough unused objects to form at least
one ZS_EMPTY zspage and free it? Compaction is aborted as soon as
class compaction can no longer result in any (further) savings.

EXAMPLE (this debug output is not part of this patch set):

-- class size
-- number of allocated objects
-- number of used objects
-- max objects per zspage
-- pages per zspage
-- estimated number of pages that will be freed

[..]
class-512 objs:544 inuse:540 maxobj-per-zspage:8  pages-per-zspage:1 zspages-to-free:0
 ... class-512 compaction is useless. break
class-496 objs:660 inuse:570 maxobj-per-zspage:33 pages-per-zspage:4 zspages-to-free:2
class-496 objs:627 inuse:570 maxobj-per-zspage:33 pages-per-zspage:4 zspages-to-free:1
class-496 objs:594 inuse:570 maxobj-per-zspage:33 pages-per-zspage:4 zspages-to-free:0
 ... class-496 compaction is useless. break
class-448 objs:657 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:4
class-448 objs:648 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:3
class-448 objs:639 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:2
class-448 objs:630 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:1
class-448 objs:621 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:0
 ... class-448 compaction is useless. break
class-432 objs:728 inuse:685 maxobj-per-zspage:28 pages-per-zspage:3 zspages-to-free:1
class-432 objs:700 inuse:685 maxobj-per-zspage:28 pages-per-zspage:3 zspages-to-free:0
 ... class-432 compaction is useless. break
class-416 objs:819 inuse:705 maxobj-per-zspage:39 pages-per-zspage:4 zspages-to-free:2
class-416 objs:780 inuse:705 maxobj-per-zspage:39 pages-per-zspage:4 zspages-to-free:1
class-416 objs:741 inuse:705 maxobj-per-zspage:39 pages-per-zspage:4 zspages-to-free:0
 ... class-416 compaction is useless. break
class-400 objs:690 inuse:674 maxobj-per-zspage:10 pages-per-zspage:1 zspages-to-free:1
class-400 objs:680 inuse:674 maxobj-per-zspage:10 pages-per-zspage:1 zspages-to-free:0
 ... class-400 compaction is useless. break
class-384 objs:736 inuse:709 maxobj-per-zspage:32 pages-per-zspage:3 zspages-to-free:0
 ... class-384 compaction is useless. break
[..]

Every "compaction is useless" indicates that we saved CPU cycles.

class-512 has
	544	objects allocated
	540	objects used
	8	objects per zspage

Even if we have an ALMOST_EMPTY zspage, we still don't have enough
room to migrate all of its objects and free this zspage; so
compaction does not make much sense here, and it's better to just
leave the class as is.
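
For example, plugging the class-512 and class-496 numbers from the
log above into the zs_can_compact() math below:

	class-512: obj_wasted = 544 - 540 = 4;  4 / 8  = 0 zspages -> nothing to free
	class-496: obj_wasted = 660 - 570 = 90; 90 / 33 = 2 zspages -> 2 * 4 = 8 pages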

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
---
 mm/zsmalloc.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 036baa8..b7410c1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1685,6 +1685,28 @@ static struct page *isolate_source_page(struct size_class *class)
 	return page;
 }
 
+/*
+ *
+ * Based on the number of unused allocated objects calculate
+ * and return the number of pages that we can free.
+ *
+ * Should be called under class->lock.
+ */
+static unsigned long zs_can_compact(struct size_class *class)
+{
+	unsigned long obj_wasted;
+
+	if (!zs_stat_get(class, CLASS_ALMOST_EMPTY))
+		return 0;
+
+	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
+		zs_stat_get(class, OBJ_USED);
+
+	obj_wasted /= get_maxobj_per_zspage(class->size,
+			class->pages_per_zspage);
+	return obj_wasted * get_pages_per_zspage(class->size);
+}
+
 static unsigned long __zs_compact(struct zs_pool *pool,
 				struct size_class *class)
 {
@@ -1698,6 +1720,9 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 
 		BUG_ON(!is_first_page(src_page));
 
+		if (!zs_can_compact(class))
+			break;
+
 		cc.index = 0;
 		cc.s_page = src_page;
 
-- 
2.4.5



* [PATCH v6 4/7] zsmalloc: cosmetic compaction code adjustments
  2015-07-07 11:56 [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction Sergey Senozhatsky
                   ` (2 preceding siblings ...)
  2015-07-07 11:56 ` [PATCH v6 3/7] zsmalloc: introduce zs_can_compact() function Sergey Senozhatsky
@ 2015-07-07 11:56 ` Sergey Senozhatsky
  2015-07-07 11:56 ` [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api Sergey Senozhatsky
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 11:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

Change zs_object_copy() argument order to be (DST, SRC) rather
than (SRC, DST). copy/move functions usually have a (to, from)
argument order.
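
For comparison, this matches the familiar C library convention:

	void *memcpy(void *dest, const void *src, size_t n);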

Rename alloc_target_page() to isolate_target_page(). This
function doesn't allocate anything; it isolates a target page,
pretty much like isolate_source_page().

Tweak __zs_compact() comment.

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b7410c1..ce1484e 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1480,7 +1480,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 }
 EXPORT_SYMBOL_GPL(zs_free);
 
-static void zs_object_copy(unsigned long src, unsigned long dst,
+static void zs_object_copy(unsigned long dst, unsigned long src,
 				struct size_class *class)
 {
 	struct page *s_page, *d_page;
@@ -1621,7 +1621,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 
 		used_obj = handle_to_obj(handle);
 		free_obj = obj_malloc(d_page, class, handle);
-		zs_object_copy(used_obj, free_obj, class);
+		zs_object_copy(free_obj, used_obj, class);
 		index++;
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
@@ -1637,7 +1637,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 	return ret;
 }
 
-static struct page *alloc_target_page(struct size_class *class)
+static struct page *isolate_target_page(struct size_class *class)
 {
 	int i;
 	struct page *page;
@@ -1726,11 +1726,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		cc.index = 0;
 		cc.s_page = src_page;
 
-		while ((dst_page = alloc_target_page(class))) {
+		while ((dst_page = isolate_target_page(class))) {
 			cc.d_page = dst_page;
 			/*
-			 * If there is no more space in dst_page, try to
-			 * allocate another zspage.
+			 * If there is no more space in dst_page, resched
+			 * and see if anyone had allocated another zspage.
 			 */
 			if (!migrate_zspage(pool, class, &cc))
 				break;
-- 
2.4.5



* [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api
  2015-07-07 11:56 [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction Sergey Senozhatsky
                   ` (3 preceding siblings ...)
  2015-07-07 11:56 ` [PATCH v6 4/7] zsmalloc: cosmetic compaction code adjustments Sergey Senozhatsky
@ 2015-07-07 11:56 ` Sergey Senozhatsky
  2015-07-07 13:36   ` Minchan Kim
  2015-07-07 11:57 ` [PATCH v6 6/7] zsmalloc: account the number of compacted pages Sergey Senozhatsky
  2015-07-07 11:57 ` [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction Sergey Senozhatsky
  6 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 11:56 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

`zs_compact_control' accounts the number of migrated objects, but
it has a limited lifespan -- we lose it as soon as zs_compact()
returns to zram. It worked fine, because (a) zram had its own
counter of migrated objects and (b) only zram could trigger
compaction. However, this does not work for automatic pool
compaction (not issued by zram). To account objects migrated
during auto-compaction (issued by the shrinker) we need to store
this number in zs_pool.

Define a new `struct zs_pool_stats' structure to keep zs_pool's
stats there. It provides only `num_migrated', as of this writing,
but it surely can be extended.

A new zsmalloc zs_pool_stats() symbol exports zs_pool's stats
back to the caller.

Use zs_pool_stats() in zram and remove `num_migrated' from
zram_stats.
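
A minimal usage sketch (this is effectively what zram's
mm_stat_show() does in the hunk below):

	struct zs_pool_stats pool_stats = {0};

	zs_pool_stats(meta->mem_pool, &pool_stats);
	/* pool_stats.num_migrated now holds the pool-wide counter */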

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Suggested-by: Minchan Kim <minchan@kernel.org>
---
 drivers/block/zram/zram_drv.c | 11 ++++++-----
 drivers/block/zram/zram_drv.h |  1 -
 include/linux/zsmalloc.h      |  6 ++++++
 mm/zsmalloc.c                 | 42 ++++++++++++++++++++++--------------------
 4 files changed, 34 insertions(+), 26 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index fb655e8..aa22fe07 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -388,7 +388,6 @@ static ssize_t comp_algorithm_store(struct device *dev,
 static ssize_t compact_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
-	unsigned long nr_migrated;
 	struct zram *zram = dev_to_zram(dev);
 	struct zram_meta *meta;
 
@@ -399,8 +398,7 @@ static ssize_t compact_store(struct device *dev,
 	}
 
 	meta = zram->meta;
-	nr_migrated = zs_compact(meta->mem_pool);
-	atomic64_add(nr_migrated, &zram->stats.num_migrated);
+	zs_compact(meta->mem_pool);
 	up_read(&zram->init_lock);
 
 	return len;
@@ -428,13 +426,16 @@ static ssize_t mm_stat_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
 	struct zram *zram = dev_to_zram(dev);
+	struct zs_pool_stats pool_stats = {0};
 	u64 orig_size, mem_used = 0;
 	long max_used;
 	ssize_t ret;
 
 	down_read(&zram->init_lock);
-	if (init_done(zram))
+	if (init_done(zram)) {
 		mem_used = zs_get_total_pages(zram->meta->mem_pool);
+		zs_pool_stats(zram->meta->mem_pool, &pool_stats);
+	}
 
 	orig_size = atomic64_read(&zram->stats.pages_stored);
 	max_used = atomic_long_read(&zram->stats.max_used_pages);
@@ -447,7 +448,7 @@ static ssize_t mm_stat_show(struct device *dev,
 			zram->limit_pages << PAGE_SHIFT,
 			max_used << PAGE_SHIFT,
 			(u64)atomic64_read(&zram->stats.zero_pages),
-			(u64)atomic64_read(&zram->stats.num_migrated));
+			pool_stats.num_migrated);
 	up_read(&zram->init_lock);
 
 	return ret;
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 6dbe2df..8e92339 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -78,7 +78,6 @@ struct zram_stats {
 	atomic64_t compr_data_size;	/* compressed size of pages stored */
 	atomic64_t num_reads;	/* failed + successful */
 	atomic64_t num_writes;	/* --do-- */
-	atomic64_t num_migrated;	/* no. of migrated object */
 	atomic64_t failed_reads;	/* can happen when memory is too low */
 	atomic64_t failed_writes;	/* can happen when memory is too low */
 	atomic64_t invalid_io;	/* non-page-aligned I/O requests */
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index 1338190..9340fce 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -34,6 +34,11 @@ enum zs_mapmode {
 	 */
 };
 
+struct zs_pool_stats {
+	/* How many objects were migrated */
+	u64		num_migrated;
+};
+
 struct zs_pool;
 
 struct zs_pool *zs_create_pool(char *name, gfp_t flags);
@@ -49,4 +54,5 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
 unsigned long zs_get_total_pages(struct zs_pool *pool);
 unsigned long zs_compact(struct zs_pool *pool);
 
+void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ce1484e..db3cb2d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -237,16 +237,18 @@ struct link_free {
 };
 
 struct zs_pool {
-	char *name;
+	char			*name;
 
-	struct size_class **size_class;
-	struct kmem_cache *handle_cachep;
+	struct size_class	**size_class;
+	struct kmem_cache	*handle_cachep;
 
-	gfp_t flags;	/* allocation flags used when growing pool */
-	atomic_long_t pages_allocated;
+	/* Allocation flags used when growing pool */
+	gfp_t			flags;
+	atomic_long_t		pages_allocated;
 
+	struct zs_pool_stats	stats;
 #ifdef CONFIG_ZSMALLOC_STAT
-	struct dentry *stat_dentry;
+	struct dentry		*stat_dentry;
 #endif
 };
 
@@ -1587,7 +1589,7 @@ struct zs_compact_control {
 	 /* Starting object index within @s_page which used for live object
 	  * in the subpage. */
 	int index;
-	/* how many of objects are migrated */
+	/* How many of objects were migrated */
 	int nr_migrated;
 };
 
@@ -1599,7 +1601,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 	struct page *s_page = cc->s_page;
 	struct page *d_page = cc->d_page;
 	unsigned long index = cc->index;
-	int nr_migrated = 0;
 	int ret = 0;
 
 	while (1) {
@@ -1626,13 +1627,12 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
 		obj_free(pool, class, used_obj);
-		nr_migrated++;
+		cc->nr_migrated++;
 	}
 
 	/* Remember last position in this iteration */
 	cc->s_page = s_page;
 	cc->index = index;
-	cc->nr_migrated = nr_migrated;
 
 	return ret;
 }
@@ -1707,14 +1707,13 @@ static unsigned long zs_can_compact(struct size_class *class)
 	return obj_wasted * get_pages_per_zspage(class->size);
 }
 
-static unsigned long __zs_compact(struct zs_pool *pool,
-				struct size_class *class)
+static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 {
 	struct zs_compact_control cc;
 	struct page *src_page;
 	struct page *dst_page = NULL;
-	unsigned long nr_total_migrated = 0;
 
+	cc.nr_migrated = 0;
 	spin_lock(&class->lock);
 	while ((src_page = isolate_source_page(class))) {
 
@@ -1736,7 +1735,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 				break;
 
 			putback_zspage(pool, class, dst_page);
-			nr_total_migrated += cc.nr_migrated;
 		}
 
 		/* Stop if we couldn't find slot */
@@ -1746,7 +1744,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		putback_zspage(pool, class, dst_page);
 		putback_zspage(pool, class, src_page);
 		spin_unlock(&class->lock);
-		nr_total_migrated += cc.nr_migrated;
 		cond_resched();
 		spin_lock(&class->lock);
 	}
@@ -1754,15 +1751,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	if (src_page)
 		putback_zspage(pool, class, src_page);
 
-	spin_unlock(&class->lock);
+	pool->stats.num_migrated += cc.nr_migrated;
 
-	return nr_total_migrated;
+	spin_unlock(&class->lock);
 }
 
 unsigned long zs_compact(struct zs_pool *pool)
 {
 	int i;
-	unsigned long nr_migrated = 0;
 	struct size_class *class;
 
 	for (i = zs_size_classes - 1; i >= 0; i--) {
@@ -1771,13 +1767,19 @@ unsigned long zs_compact(struct zs_pool *pool)
 			continue;
 		if (class->index != i)
 			continue;
-		nr_migrated += __zs_compact(pool, class);
+		__zs_compact(pool, class);
 	}
 
-	return nr_migrated;
+	return pool->stats.num_migrated;
 }
 EXPORT_SYMBOL_GPL(zs_compact);
 
+void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats)
+{
+	memcpy(stats, &pool->stats, sizeof(struct zs_pool_stats));
+}
+EXPORT_SYMBOL_GPL(zs_pool_stats);
+
 /**
  * zs_create_pool - Creates an allocation pool to work from.
  * @flags: allocation flags used to allocate pool metadata
-- 
2.4.5



* [PATCH v6 6/7] zsmalloc: account the number of compacted pages
  2015-07-07 11:56 [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction Sergey Senozhatsky
                   ` (4 preceding siblings ...)
  2015-07-07 11:56 ` [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api Sergey Senozhatsky
@ 2015-07-07 11:57 ` Sergey Senozhatsky
  2015-07-07 13:39   ` Minchan Kim
  2015-07-07 11:57 ` [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction Sergey Senozhatsky
  6 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 11:57 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

Compaction returns to zram the number of migrated objects, which
is quite uninformative -- we have objects of different sizes, so
user space cannot obtain any valuable data from that number.
Change compaction to operate in terms of pages and return to the
compaction issuer the number of pages that were freed during
compaction. So from now on we will export a more meaningful value
in zram<id>/mm_stat -- the number of freed (compacted) pages.

This requires:
(a) a rename of `num_migrated' to `pages_compacted'
(b) an internal API change -- return first_page's fullness_group
from putback_zspage(), so we know when putback_zspage() did
free_zspage(). It helps us account compaction stats correctly.
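
The resulting caller pattern (taken from the __zs_compact() hunk
below):

	if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
		pool->stats.pages_compacted +=
				get_pages_per_zspage(class->size);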

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
---
 Documentation/blockdev/zram.txt |  3 ++-
 drivers/block/zram/zram_drv.c   |  2 +-
 include/linux/zsmalloc.h        |  4 ++--
 mm/zsmalloc.c                   | 27 +++++++++++++++++----------
 4 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
index c4de576..62435bb 100644
--- a/Documentation/blockdev/zram.txt
+++ b/Documentation/blockdev/zram.txt
@@ -144,7 +144,8 @@ mem_used_max      RW    the maximum amount memory zram have consumed to
                         store compressed data
 mem_limit         RW    the maximum amount of memory ZRAM can use to store
                         the compressed data
-num_migrated      RO    the number of objects migrated migrated by compaction
+pages_compacted   RO    the number of pages freed during compaction
+                        (available only via zram<id>/mm_stat node)
 compact           WO    trigger memory compaction
 
 WARNING
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index aa22fe07..1bcbc19 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -448,7 +448,7 @@ static ssize_t mm_stat_show(struct device *dev,
 			zram->limit_pages << PAGE_SHIFT,
 			max_used << PAGE_SHIFT,
 			(u64)atomic64_read(&zram->stats.zero_pages),
-			pool_stats.num_migrated);
+			pool_stats.pages_compacted);
 	up_read(&zram->init_lock);
 
 	return ret;
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index 9340fce..cda4ad4 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -35,8 +35,8 @@ enum zs_mapmode {
 };
 
 struct zs_pool_stats {
-	/* How many objects were migrated */
-	u64		num_migrated;
+	/* How many pages were migrated (freed) */
+	u64		pages_compacted;
 };
 
 struct zs_pool;
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index db3cb2d..13f2c4a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1589,8 +1589,6 @@ struct zs_compact_control {
 	 /* Starting object index within @s_page which used for live object
 	  * in the subpage. */
 	int index;
-	/* How many of objects were migrated */
-	int nr_migrated;
 };
 
 static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
@@ -1627,7 +1625,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
 		record_obj(handle, free_obj);
 		unpin_tag(handle);
 		obj_free(pool, class, used_obj);
-		cc->nr_migrated++;
 	}
 
 	/* Remember last position in this iteration */
@@ -1653,8 +1650,17 @@ static struct page *isolate_target_page(struct size_class *class)
 	return page;
 }
 
-static void putback_zspage(struct zs_pool *pool, struct size_class *class,
-				struct page *first_page)
+/*
+ * putback_zspage - add @first_page into right class's fullness list
+ * @pool: target pool
+ * @class: destination class
+ * @first_page: target page
+ *
+ * Return @first_page's fullness_group
+ */
+static enum fullness_group putback_zspage(struct zs_pool *pool,
+			struct size_class *class,
+			struct page *first_page)
 {
 	enum fullness_group fullness;
 
@@ -1672,6 +1678,8 @@ static void putback_zspage(struct zs_pool *pool, struct size_class *class,
 
 		free_zspage(first_page);
 	}
+
+	return fullness;
 }
 
 static struct page *isolate_source_page(struct size_class *class)
@@ -1713,7 +1721,6 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 	struct page *src_page;
 	struct page *dst_page = NULL;
 
-	cc.nr_migrated = 0;
 	spin_lock(&class->lock);
 	while ((src_page = isolate_source_page(class))) {
 
@@ -1742,7 +1749,9 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 			break;
 
 		putback_zspage(pool, class, dst_page);
-		putback_zspage(pool, class, src_page);
+		if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
+			pool->stats.pages_compacted +=
+				get_pages_per_zspage(class->size);
 		spin_unlock(&class->lock);
 		cond_resched();
 		spin_lock(&class->lock);
@@ -1751,8 +1760,6 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
 	if (src_page)
 		putback_zspage(pool, class, src_page);
 
-	pool->stats.num_migrated += cc.nr_migrated;
-
 	spin_unlock(&class->lock);
 }
 
@@ -1770,7 +1777,7 @@ unsigned long zs_compact(struct zs_pool *pool)
 		__zs_compact(pool, class);
 	}
 
-	return pool->stats.num_migrated;
+	return pool->stats.pages_compacted;
 }
 EXPORT_SYMBOL_GPL(zs_compact);
 
-- 
2.4.5



* [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction
  2015-07-07 11:56 [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction Sergey Senozhatsky
                   ` (5 preceding siblings ...)
  2015-07-07 11:57 ` [PATCH v6 6/7] zsmalloc: account the number of compacted pages Sergey Senozhatsky
@ 2015-07-07 11:57 ` Sergey Senozhatsky
  2015-07-07 13:44   ` Minchan Kim
  6 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 11:57 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

Perform automatic pool compaction by a shrinker when the system
is getting tight on memory.

User space has very little knowledge regarding zsmalloc fragmentation
and basically has no mechanism to tell whether compaction will result
in any memory gain. Another issue is that user space is not always
aware of the fact that the system is getting tight on memory, which
leads to very uncomfortable scenarios when user space may start
issuing compaction 'randomly' or from crontab (for example).
Fragmentation is not always necessarily bad: allocated but unused
objects may, after all, be filled with data later, w/o the need of
allocating a new zspage. On the other hand, we obviously don't want
to waste memory when the system needs it.

Compaction now has a relatively quick pool scan, so we are able to
easily estimate the number of pages that will be freed, which makes
it possible to call this function from a shrinker->count_objects()
callback. We also abort compaction as soon as we detect that we
can't free any pages any more, preventing wasteful object migrations.
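
In short, the two shrinker callbacks added below boil down to
(condensed from the full hunks):

	/* ->count_objects(): cheap estimate, taken under class->lock */
	pages_to_free += zs_can_compact(class);

	/* ->scan_objects(): do the work, report the compaction delta */
	pages_freed = zs_compact(pool) - pages_freed;
	return pages_freed ? pages_freed : SHRINK_STOP;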

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Suggested-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 13f2c4a..83b2e97 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -247,6 +247,10 @@ struct zs_pool {
 	atomic_long_t		pages_allocated;
 
 	struct zs_pool_stats	stats;
+
+	/* Compact classes */
+	struct shrinker		shrinker;
+	bool			shrinker_enabled;
 #ifdef CONFIG_ZSMALLOC_STAT
 	struct dentry		*stat_dentry;
 #endif
@@ -1787,6 +1791,69 @@ void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats)
 }
 EXPORT_SYMBOL_GPL(zs_pool_stats);
 
+static unsigned long zs_shrinker_scan(struct shrinker *shrinker,
+		struct shrink_control *sc)
+{
+	unsigned long pages_freed;
+	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
+			shrinker);
+
+	pages_freed = pool->stats.pages_compacted;
+	/*
+	 * Compact classes and calculate compaction delta.
+	 * Can run concurrently with a manually triggered
+	 * (by user) compaction.
+	 */
+	pages_freed = zs_compact(pool) - pages_freed;
+
+	return pages_freed ? pages_freed : SHRINK_STOP;
+}
+
+static unsigned long zs_shrinker_count(struct shrinker *shrinker,
+		struct shrink_control *sc)
+{
+	int i;
+	struct size_class *class;
+	unsigned long pages_to_free = 0;
+	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
+			shrinker);
+
+	if (!pool->shrinker_enabled)
+		return 0;
+
+	for (i = zs_size_classes - 1; i >= 0; i--) {
+		class = pool->size_class[i];
+		if (!class)
+			continue;
+		if (class->index != i)
+			continue;
+
+		spin_lock(&class->lock);
+		pages_to_free += zs_can_compact(class);
+		spin_unlock(&class->lock);
+	}
+
+	return pages_to_free;
+}
+
+static void zs_unregister_shrinker(struct zs_pool *pool)
+{
+	if (pool->shrinker_enabled) {
+		unregister_shrinker(&pool->shrinker);
+		pool->shrinker_enabled = false;
+	}
+}
+
+static int zs_register_shrinker(struct zs_pool *pool)
+{
+	pool->shrinker.scan_objects = zs_shrinker_scan;
+	pool->shrinker.count_objects = zs_shrinker_count;
+	pool->shrinker.batch = 0;
+	pool->shrinker.seeks = DEFAULT_SEEKS;
+
+	return register_shrinker(&pool->shrinker);
+}
+
 /**
  * zs_create_pool - Creates an allocation pool to work from.
  * @flags: allocation flags used to allocate pool metadata
@@ -1872,6 +1939,12 @@ struct zs_pool *zs_create_pool(char *name, gfp_t flags)
 	if (zs_pool_stat_create(name, pool))
 		goto err;
 
+	/*
+	 * Not critical, we still can use the pool
+	 * and user can trigger compaction manually.
+	 */
+	if (zs_register_shrinker(pool) == 0)
+		pool->shrinker_enabled = true;
 	return pool;
 
 err:
@@ -1884,6 +1957,7 @@ void zs_destroy_pool(struct zs_pool *pool)
 {
 	int i;
 
+	zs_unregister_shrinker(pool);
 	zs_pool_stat_destroy(pool);
 
 	for (i = 0; i < zs_size_classes; i++) {
-- 
2.4.5



* Re: [PATCH v6 3/7] zsmalloc: introduce zs_can_compact() function
  2015-07-07 11:56 ` [PATCH v6 3/7] zsmalloc: introduce zs_can_compact() function Sergey Senozhatsky
@ 2015-07-07 13:21   ` Minchan Kim
  0 siblings, 0 replies; 23+ messages in thread
From: Minchan Kim @ 2015-07-07 13:21 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky

On Tue, Jul 07, 2015 at 08:56:57PM +0900, Sergey Senozhatsky wrote:
> This function checks if class compaction will free any pages.
> In other words, do we have enough unused objects to form at least
> one ZS_EMPTY zspage and free it? Compaction is aborted as soon as
> class compaction can no longer result in any (further) savings.
> 
> EXAMPLE (this debug output is not part of this patch set):
> 
> -- class size
> -- number of allocated objects
> -- number of used objects
> -- max objects per zspage
> -- pages per zspage
> -- estimated number of pages that will be freed
> 
> [..]
> class-512 objs:544 inuse:540 maxobj-per-zspage:8  pages-per-zspage:1 zspages-to-free:0
>  ... class-512 compaction is useless. break
> class-496 objs:660 inuse:570 maxobj-per-zspage:33 pages-per-zspage:4 zspages-to-free:2
> class-496 objs:627 inuse:570 maxobj-per-zspage:33 pages-per-zspage:4 zspages-to-free:1
> class-496 objs:594 inuse:570 maxobj-per-zspage:33 pages-per-zspage:4 zspages-to-free:0
>  ... class-496 compaction is useless. break
> class-448 objs:657 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:4
> class-448 objs:648 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:3
> class-448 objs:639 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:2
> class-448 objs:630 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:1
> class-448 objs:621 inuse:617 maxobj-per-zspage:9  pages-per-zspage:1 zspages-to-free:0
>  ... class-448 compaction is useless. break
> class-432 objs:728 inuse:685 maxobj-per-zspage:28 pages-per-zspage:3 zspages-to-free:1
> class-432 objs:700 inuse:685 maxobj-per-zspage:28 pages-per-zspage:3 zspages-to-free:0
>  ... class-432 compaction is useless. break
> class-416 objs:819 inuse:705 maxobj-per-zspage:39 pages-per-zspage:4 zspages-to-free:2
> class-416 objs:780 inuse:705 maxobj-per-zspage:39 pages-per-zspage:4 zspages-to-free:1
> class-416 objs:741 inuse:705 maxobj-per-zspage:39 pages-per-zspage:4 zspages-to-free:0
>  ... class-416 compaction is useless. break
> class-400 objs:690 inuse:674 maxobj-per-zspage:10 pages-per-zspage:1 zspages-to-free:1
> class-400 objs:680 inuse:674 maxobj-per-zspage:10 pages-per-zspage:1 zspages-to-free:0
>  ... class-400 compaction is useless. break
> class-384 objs:736 inuse:709 maxobj-per-zspage:32 pages-per-zspage:3 zspages-to-free:0
>  ... class-384 compaction is useless. break
> [..]
> 
> Every "compaction is useless" indicates that we saved CPU cycles.
> 
> class-512 has
> 	544	objects allocated
> 	540	objects used
> 	8	objects per zspage
> 
> Even if we have an ALMOST_EMPTY zspage, we still don't have enough
> room to migrate all of its objects and free this zspage; so
> compaction does not make much sense here, and it's better to just
> leave the class as is.
> 
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>

A nit below.

> ---
>  mm/zsmalloc.c | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 036baa8..b7410c1 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1685,6 +1685,28 @@ static struct page *isolate_source_page(struct size_class *class)
>  	return page;
>  }
>  
> +/*
> + *
> + * Based on the number of unused allocated objects calculate
> + * and return the number of pages that we can free.
> + *
> + * Should be called under class->lock.
> + */
> +static unsigned long zs_can_compact(struct size_class *class)
> +{
> +	unsigned long obj_wasted;
> +
> +	if (!zs_stat_get(class, CLASS_ALMOST_EMPTY))
> +		return 0;
> +
> +	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
> +		zs_stat_get(class, OBJ_USED);
> +
> +	obj_wasted /= get_maxobj_per_zspage(class->size,
> +			class->pages_per_zspage);

Normally, I insert a blank line before the last return, and looking
at other code under mm, it seems to be a common rule.

> +	return obj_wasted * get_pages_per_zspage(class->size);
> +}
> +
>  static unsigned long __zs_compact(struct zs_pool *pool,
>  				struct size_class *class)
>  {
> @@ -1698,6 +1720,9 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  
>  		BUG_ON(!is_first_page(src_page));
>  
> +		if (!zs_can_compact(class))
> +			break;
> +
>  		cc.index = 0;
>  		cc.s_page = src_page;
>  
> -- 
> 2.4.5
> 

-- 
Kind regards,
Minchan Kim


* Re: [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api
  2015-07-07 11:56 ` [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api Sergey Senozhatsky
@ 2015-07-07 13:36   ` Minchan Kim
  2015-07-07 14:32     ` Sergey Senozhatsky
  0 siblings, 1 reply; 23+ messages in thread
From: Minchan Kim @ 2015-07-07 13:36 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky

On Tue, Jul 07, 2015 at 08:56:59PM +0900, Sergey Senozhatsky wrote:
> `zs_compact_control' accounts the number of migrated objects, but
> it has a limited lifespan -- we lose it as soon as zs_compact()
> returns to zram. It worked fine, because (a) zram had its own
> counter of migrated objects and (b) only zram could trigger
> compaction. However, this does not work for automatic pool
> compaction (not issued by zram). To account objects migrated
> during auto-compaction (issued by the shrinker) we need to store
> this number in zs_pool.
> 
> Define a new `struct zs_pool_stats' structure to keep zs_pool's
> stats there. It provides only `num_migrated', as of this writing,
> but it surely can be extended.
> 
> A new zsmalloc zs_pool_stats() symbol exports zs_pool's stats
> back to the caller.
> 
> Use zs_pool_stats() in zram and remove `num_migrated' from
> zram_stats.
> 
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Suggested-by: Minchan Kim <minchan@kernel.org>
> ---
>  drivers/block/zram/zram_drv.c | 11 ++++++-----
>  drivers/block/zram/zram_drv.h |  1 -
>  include/linux/zsmalloc.h      |  6 ++++++
>  mm/zsmalloc.c                 | 42 ++++++++++++++++++++++--------------------
>  4 files changed, 34 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index fb655e8..aa22fe07 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -388,7 +388,6 @@ static ssize_t comp_algorithm_store(struct device *dev,
>  static ssize_t compact_store(struct device *dev,
>  		struct device_attribute *attr, const char *buf, size_t len)
>  {
> -	unsigned long nr_migrated;
>  	struct zram *zram = dev_to_zram(dev);
>  	struct zram_meta *meta;
>  
> @@ -399,8 +398,7 @@ static ssize_t compact_store(struct device *dev,
>  	}
>  
>  	meta = zram->meta;
> -	nr_migrated = zs_compact(meta->mem_pool);
> -	atomic64_add(nr_migrated, &zram->stats.num_migrated);
> +	zs_compact(meta->mem_pool);
>  	up_read(&zram->init_lock);
>  
>  	return len;
> @@ -428,13 +426,16 @@ static ssize_t mm_stat_show(struct device *dev,
>  		struct device_attribute *attr, char *buf)
>  {
>  	struct zram *zram = dev_to_zram(dev);
> +	struct zs_pool_stats pool_stats = {0};

Does it work even if the first member of the structure is non-scalar?
Personally, I prefer memset for initialization.
I believe a modern compiler would optimize that quite well.
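
I.e., something like:

	struct zs_pool_stats pool_stats;

	memset(&pool_stats, 0, sizeof(pool_stats));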


>  	u64 orig_size, mem_used = 0;
>  	long max_used;
>  	ssize_t ret;
>  
>  	down_read(&zram->init_lock);
> -	if (init_done(zram))
> +	if (init_done(zram)) {
>  		mem_used = zs_get_total_pages(zram->meta->mem_pool);
> +		zs_pool_stats(zram->meta->mem_pool, &pool_stats);
> +	}
>  
>  	orig_size = atomic64_read(&zram->stats.pages_stored);
>  	max_used = atomic_long_read(&zram->stats.max_used_pages);
> @@ -447,7 +448,7 @@ static ssize_t mm_stat_show(struct device *dev,
>  			zram->limit_pages << PAGE_SHIFT,
>  			max_used << PAGE_SHIFT,
>  			(u64)atomic64_read(&zram->stats.zero_pages),
> -			(u64)atomic64_read(&zram->stats.num_migrated));
> +			pool_stats.num_migrated);
>  	up_read(&zram->init_lock);
>  
>  	return ret;
> diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> index 6dbe2df..8e92339 100644
> --- a/drivers/block/zram/zram_drv.h
> +++ b/drivers/block/zram/zram_drv.h
> @@ -78,7 +78,6 @@ struct zram_stats {
>  	atomic64_t compr_data_size;	/* compressed size of pages stored */
>  	atomic64_t num_reads;	/* failed + successful */
>  	atomic64_t num_writes;	/* --do-- */
> -	atomic64_t num_migrated;	/* no. of migrated object */
>  	atomic64_t failed_reads;	/* can happen when memory is too low */
>  	atomic64_t failed_writes;	/* can happen when memory is too low */
>  	atomic64_t invalid_io;	/* non-page-aligned I/O requests */
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> index 1338190..9340fce 100644
> --- a/include/linux/zsmalloc.h
> +++ b/include/linux/zsmalloc.h
> @@ -34,6 +34,11 @@ enum zs_mapmode {
>  	 */
>  };
>  
> +struct zs_pool_stats {
> +	/* How many objects were migrated */
> +	u64		num_migrated;
> +};
> +
>  struct zs_pool;
>  
>  struct zs_pool *zs_create_pool(char *name, gfp_t flags);
> @@ -49,4 +54,5 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
>  unsigned long zs_get_total_pages(struct zs_pool *pool);
>  unsigned long zs_compact(struct zs_pool *pool);
>  
> +void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
>  #endif
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index ce1484e..db3cb2d 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -237,16 +237,18 @@ struct link_free {
>  };
>  
>  struct zs_pool {
> -	char *name;
> +	char			*name;

huge tab?

>  
> -	struct size_class **size_class;
> -	struct kmem_cache *handle_cachep;
> +	struct size_class	**size_class;
> +	struct kmem_cache	*handle_cachep;

tab?
tab?

>  
> -	gfp_t flags;	/* allocation flags used when growing pool */
> -	atomic_long_t pages_allocated;

Why change the comment position?

> +	/* Allocation flags used when growing pool */
> +	gfp_t			flags;
> +	atomic_long_t		pages_allocated;
>  

Why blank line?

> +	struct zs_pool_stats	stats;
>  #ifdef CONFIG_ZSMALLOC_STAT
> -	struct dentry *stat_dentry;
> +	struct dentry		*stat_dentry;

Tab.

>  #endif
>  };
>  
> @@ -1587,7 +1589,7 @@ struct zs_compact_control {
>  	 /* Starting object index within @s_page which used for live object
>  	  * in the subpage. */
>  	int index;
> -	/* how many of objects are migrated */
> +	/* How many of objects were migrated */
>  	int nr_migrated;
>  };
>  
> @@ -1599,7 +1601,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>  	struct page *s_page = cc->s_page;
>  	struct page *d_page = cc->d_page;
>  	unsigned long index = cc->index;
> -	int nr_migrated = 0;
>  	int ret = 0;
>  
>  	while (1) {
> @@ -1626,13 +1627,12 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>  		record_obj(handle, free_obj);
>  		unpin_tag(handle);
>  		obj_free(pool, class, used_obj);
> -		nr_migrated++;
> +		cc->nr_migrated++;
>  	}
>  
>  	/* Remember last position in this iteration */
>  	cc->s_page = s_page;
>  	cc->index = index;
> -	cc->nr_migrated = nr_migrated;
>  
>  	return ret;
>  }
> @@ -1707,14 +1707,13 @@ static unsigned long zs_can_compact(struct size_class *class)
>  	return obj_wasted * get_pages_per_zspage(class->size);
>  }
>  
> -static unsigned long __zs_compact(struct zs_pool *pool,
> -				struct size_class *class)
> +static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  {
>  	struct zs_compact_control cc;
>  	struct page *src_page;
>  	struct page *dst_page = NULL;
> -	unsigned long nr_total_migrated = 0;
>  
> +	cc.nr_migrated = 0;
>  	spin_lock(&class->lock);
>  	while ((src_page = isolate_source_page(class))) {
>  
> @@ -1736,7 +1735,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  				break;
>  
>  			putback_zspage(pool, class, dst_page);
> -			nr_total_migrated += cc.nr_migrated;
>  		}
>  
>  		/* Stop if we couldn't find slot */
> @@ -1746,7 +1744,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		putback_zspage(pool, class, dst_page);
>  		putback_zspage(pool, class, src_page);
>  		spin_unlock(&class->lock);
> -		nr_total_migrated += cc.nr_migrated;
>  		cond_resched();
>  		spin_lock(&class->lock);
>  	}
> @@ -1754,15 +1751,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	if (src_page)
>  		putback_zspage(pool, class, src_page);
>  
> -	spin_unlock(&class->lock);
> +	pool->stats.num_migrated += cc.nr_migrated;
>  
> -	return nr_total_migrated;
> +	spin_unlock(&class->lock);
>  }
>  
>  unsigned long zs_compact(struct zs_pool *pool)
>  {
>  	int i;
> -	unsigned long nr_migrated = 0;
>  	struct size_class *class;
>  
>  	for (i = zs_size_classes - 1; i >= 0; i--) {
> @@ -1771,13 +1767,19 @@ unsigned long zs_compact(struct zs_pool *pool)
>  			continue;
>  		if (class->index != i)
>  			continue;
> -		nr_migrated += __zs_compact(pool, class);
> +		__zs_compact(pool, class);
>  	}
>  
> -	return nr_migrated;
> +	return pool->stats.num_migrated;
>  }
>  EXPORT_SYMBOL_GPL(zs_compact);
>  
> +void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats)
> +{
> +	memcpy(stats, &pool->stats, sizeof(struct zs_pool_stats));
> +}
> +EXPORT_SYMBOL_GPL(zs_pool_stats);
> +
>  /**
>   * zs_create_pool - Creates an allocation pool to work from.
>   * @flags: allocation flags used to allocate pool metadata
> -- 
> 2.4.5
> 

-- 
Kind regards,
Minchan Kim


* Re: [PATCH v6 6/7] zsmalloc: account the number of compacted pages
  2015-07-07 11:57 ` [PATCH v6 6/7] zsmalloc: account the number of compacted pages Sergey Senozhatsky
@ 2015-07-07 13:39   ` Minchan Kim
  2015-07-07 14:21     ` Sergey Senozhatsky
  0 siblings, 1 reply; 23+ messages in thread
From: Minchan Kim @ 2015-07-07 13:39 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky

On Tue, Jul 07, 2015 at 08:57:00PM +0900, Sergey Senozhatsky wrote:
> Compaction returns to zram the number of migrated objects, which
> is quite uninformative -- we have objects of different sizes, so
> user space cannot obtain any valuable data from that number.
> Change compaction to operate in terms of pages and return to the
> compaction issuer the number of pages that were freed during
> compaction. So from now on we will export a more meaningful value
> in zram<id>/mm_stat -- the number of freed (compacted) pages.
> 
> This requires:
> (a) a rename of `num_migrated' to `pages_compacted'
> (b) an internal API change -- return first_page's fullness_group
> from putback_zspage(), so we know when putback_zspage() did
> free_zspage(). It helps us account compaction stats correctly.
> 
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>

A nit below.

> ---
>  Documentation/blockdev/zram.txt |  3 ++-
>  drivers/block/zram/zram_drv.c   |  2 +-
>  include/linux/zsmalloc.h        |  4 ++--
>  mm/zsmalloc.c                   | 27 +++++++++++++++++----------
>  4 files changed, 22 insertions(+), 14 deletions(-)
> 
> diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> index c4de576..62435bb 100644
> --- a/Documentation/blockdev/zram.txt
> +++ b/Documentation/blockdev/zram.txt
> @@ -144,7 +144,8 @@ mem_used_max      RW    the maximum amount memory zram have consumed to
>                          store compressed data
>  mem_limit         RW    the maximum amount of memory ZRAM can use to store
>                          the compressed data
> -num_migrated      RO    the number of objects migrated migrated by compaction
> +pages_compacted   RO    the number of pages freed during compaction
> +                        (available only via zram<id>/mm_stat node)
>  compact           WO    trigger memory compaction
>  
>  WARNING
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index aa22fe07..1bcbc19 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -448,7 +448,7 @@ static ssize_t mm_stat_show(struct device *dev,
>  			zram->limit_pages << PAGE_SHIFT,
>  			max_used << PAGE_SHIFT,
>  			(u64)atomic64_read(&zram->stats.zero_pages),
> -			pool_stats.num_migrated);
> +			pool_stats.pages_compacted);
>  	up_read(&zram->init_lock);
>  
>  	return ret;
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> index 9340fce..cda4ad4 100644
> --- a/include/linux/zsmalloc.h
> +++ b/include/linux/zsmalloc.h
> @@ -35,8 +35,8 @@ enum zs_mapmode {
>  };
>  
>  struct zs_pool_stats {
> -	/* How many objects were migrated */
> -	u64		num_migrated;
> +	/* How many pages were migrated (freed) */
> +	u64		pages_compacted;

Hmm, if we account it in page units, unsigned long is enough, even
for 32bit.
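
I.e.:

	struct zs_pool_stats {
		/* How many pages were migrated (freed) */
		unsigned long	pages_compacted;
	};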

>  };
>  
>  struct zs_pool;
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index db3cb2d..13f2c4a 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1589,8 +1589,6 @@ struct zs_compact_control {
>  	 /* Starting object index within @s_page which used for live object
>  	  * in the subpage. */
>  	int index;
> -	/* How many of objects were migrated */
> -	int nr_migrated;
>  };
>  
>  static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> @@ -1627,7 +1625,6 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>  		record_obj(handle, free_obj);
>  		unpin_tag(handle);
>  		obj_free(pool, class, used_obj);
> -		cc->nr_migrated++;
>  	}
>  
>  	/* Remember last position in this iteration */
> @@ -1653,8 +1650,17 @@ static struct page *isolate_target_page(struct size_class *class)
>  	return page;
>  }
>  
> -static void putback_zspage(struct zs_pool *pool, struct size_class *class,
> -				struct page *first_page)
> +/*
> + * putback_zspage - add @first_page into right class's fullness list
> + * @pool: target pool
> + * @class: destination class
> + * @first_page: target page
> + *
> + * Return @first_page's fullness_group
> + */
> +static enum fullness_group putback_zspage(struct zs_pool *pool,
> +			struct size_class *class,
> +			struct page *first_page)
>  {
>  	enum fullness_group fullness;
>  
> @@ -1672,6 +1678,8 @@ static void putback_zspage(struct zs_pool *pool, struct size_class *class,
>  
>  		free_zspage(first_page);
>  	}
> +
> +	return fullness;
>  }
>  
>  static struct page *isolate_source_page(struct size_class *class)
> @@ -1713,7 +1721,6 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  	struct page *src_page;
>  	struct page *dst_page = NULL;
>  
> -	cc.nr_migrated = 0;
>  	spin_lock(&class->lock);
>  	while ((src_page = isolate_source_page(class))) {
>  
> @@ -1742,7 +1749,9 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  			break;
>  
>  		putback_zspage(pool, class, dst_page);
> -		putback_zspage(pool, class, src_page);
> +		if (putback_zspage(pool, class, src_page) == ZS_EMPTY)
> +			pool->stats.pages_compacted +=
> +				get_pages_per_zspage(class->size);
>  		spin_unlock(&class->lock);
>  		cond_resched();
>  		spin_lock(&class->lock);
> @@ -1751,8 +1760,6 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
>  	if (src_page)
>  		putback_zspage(pool, class, src_page);
>  
> -	pool->stats.num_migrated += cc.nr_migrated;
> -
>  	spin_unlock(&class->lock);
>  }
>  
> @@ -1770,7 +1777,7 @@ unsigned long zs_compact(struct zs_pool *pool)
>  		__zs_compact(pool, class);
>  	}
>  
> -	return pool->stats.num_migrated;
> +	return pool->stats.pages_compacted;
>  }
>  EXPORT_SYMBOL_GPL(zs_compact);
>  
> -- 
> 2.4.5
> 

-- 
Kind regards,
Minchan Kim


* Re: [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction
  2015-07-07 11:57 ` [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction Sergey Senozhatsky
@ 2015-07-07 13:44   ` Minchan Kim
  2015-07-07 14:41     ` Sergey Senozhatsky
  0 siblings, 1 reply; 23+ messages in thread
From: Minchan Kim @ 2015-07-07 13:44 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky

On Tue, Jul 07, 2015 at 08:57:01PM +0900, Sergey Senozhatsky wrote:
> Perform automatic pool compaction by a shrinker when system
> is getting tight on memory.
> 
> User-space has a very little knowledge regarding zsmalloc fragmentation
> and basically has no mechanism to tell whether compaction will result
> in any memory gain. Another issue is that user space is not always
> aware of the fact that system is getting tight on memory. Which leads
> to very uncomfortable scenarios when user space may start issuing
> compaction 'randomly' or from crontab (for example). Fragmentation
> is not always necessarily bad, allocated and unused objects, after all,
> may be filled with the data later, w/o the need of allocating a new
> zspage. On the other hand, we obviously don't want to waste memory
> when the system needs it.
> 
> Compaction now has a relatively quick pool scan so we are able to
> estimate the number of pages that will be freed easily, which makes it
> possible to call this function from a shrinker->count_objects() callback.
> We also abort compaction as soon as we detect that we can't free any
> pages any more, preventing wasteful objects migrations.
> 
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Suggested-by: Minchan Kim <minchan@kernel.org>


There is one suggestion. Please see below.

> ---
>  mm/zsmalloc.c | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 74 insertions(+)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 13f2c4a..83b2e97 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -247,6 +247,10 @@ struct zs_pool {
>  	atomic_long_t		pages_allocated;
>  
>  	struct zs_pool_stats	stats;
> +
> +	/* Compact classes */
> +	struct shrinker		shrinker;
> +	bool			shrinker_enabled;
>  #ifdef CONFIG_ZSMALLOC_STAT
>  	struct dentry		*stat_dentry;
>  #endif
> @@ -1787,6 +1791,69 @@ void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats)
>  }
>  EXPORT_SYMBOL_GPL(zs_pool_stats);
>  
> +static unsigned long zs_shrinker_scan(struct shrinker *shrinker,
> +		struct shrink_control *sc)
> +{
> +	unsigned long pages_freed;
> +	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
> +			shrinker);
> +
> +	pages_freed = pool->stats.pages_compacted;
> +	/*
> +	 * Compact classes and calculate compaction delta.
> +	 * Can run concurrently with a manually triggered
> +	 * (by user) compaction.
> +	 */
> +	pages_freed = zs_compact(pool) - pages_freed;
> +
> +	return pages_freed ? pages_freed : SHRINK_STOP;
> +}
> +
> +static unsigned long zs_shrinker_count(struct shrinker *shrinker,
> +		struct shrink_control *sc)
> +{
> +	int i;
> +	struct size_class *class;
> +	unsigned long pages_to_free = 0;
> +	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
> +			shrinker);
> +
> +	if (!pool->shrinker_enabled)
> +		return 0;
> +
> +	for (i = zs_size_classes - 1; i >= 0; i--) {
> +		class = pool->size_class[i];
> +		if (!class)
> +			continue;
> +		if (class->index != i)
> +			continue;
> +
> +		spin_lock(&class->lock);
> +		pages_to_free += zs_can_compact(class);
> +		spin_unlock(&class->lock);
> +	}
> +
> +	return pages_to_free;
> +}
> +
> +static void zs_unregister_shrinker(struct zs_pool *pool)
> +{
> +	if (pool->shrinker_enabled) {
> +		unregister_shrinker(&pool->shrinker);
> +		pool->shrinker_enabled = false;
> +	}
> +}
> +
> +static int zs_register_shrinker(struct zs_pool *pool)
> +{
> +	pool->shrinker.scan_objects = zs_shrinker_scan;
> +	pool->shrinker.count_objects = zs_shrinker_count;
> +	pool->shrinker.batch = 0;
> +	pool->shrinker.seeks = DEFAULT_SEEKS;
> +
> +	return register_shrinker(&pool->shrinker);
> +}
> +
>  /**
>   * zs_create_pool - Creates an allocation pool to work from.
>   * @flags: allocation flags used to allocate pool metadata
> @@ -1872,6 +1939,12 @@ struct zs_pool *zs_create_pool(char *name, gfp_t flags)
>  	if (zs_pool_stat_create(name, pool))
>  		goto err;
>  
> +	/*
> +	 * Not critical, we can still use the pool
> +	 * and the user can trigger compaction manually.
> +	 */
> +	if (zs_register_shrinker(pool) == 0)
> +		pool->shrinker_enabled = true;

IMO, there is no value in maintaining it just in case
register_shrinker() fails in practice.

Let's remove shrinker_enabled and abort pool creation if shrinker
registration fails.
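
Something like this, perhaps (just a sketch against your v6 naming,
not tested):

	if (zs_register_shrinker(pool))
		goto err;	/* `err:' already does zs_destroy_pool() */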

Tomorrow, I will test this patchset and add Acked-by if it passes.

Thanks!


>  	return pool;
>  
>  err:
> @@ -1884,6 +1957,7 @@ void zs_destroy_pool(struct zs_pool *pool)
>  {
>  	int i;
>  
> +	zs_unregister_shrinker(pool);
>  	zs_pool_stat_destroy(pool);
>  
>  	for (i = 0; i < zs_size_classes; i++) {
> -- 
> 2.4.5
> 

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 6/7] zsmalloc: account the number of compacted pages
  2015-07-07 13:39   ` Minchan Kim
@ 2015-07-07 14:21     ` Sergey Senozhatsky
  2015-07-07 14:33       ` Minchan Kim
  0 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 14:21 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel,
	Sergey Senozhatsky

On (07/07/15 22:39), Minchan Kim wrote:
[..]
> >  struct zs_pool_stats {
> > -	/* How many objects were migrated */
> > -	u64		num_migrated;
> > +	/* How many pages were migrated (freed) */
> > +	u64		pages_compacted;
> 
> Hmm, if we account it in page units, unsigned long is enough for 32-bit.

Well, this is a 'how many pages were freed overall' counter. We don't
control the lifetime of the device, so I think it can outgrow 32 bits
in some `extreme' cases.
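
Back-of-the-envelope (just a sketch, assuming 4K pages for
illustration):

	#include <stdio.h>

	int main(void)
	{
		/* a 32-bit counter wraps after 2^32 accounted pages */
		unsigned long long wrap_pages = 1ULL << 32;
		/* times 4096 bytes per page */
		unsigned long long wrap_bytes = wrap_pages << 12;

		/* prints: wraps after 16 TiB compacted */
		printf("wraps after %llu TiB compacted\n", wrap_bytes >> 40);
		return 0;
	}

So a long-living, busy device can, in theory, account more than what
fits in 32 bits.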

	-ss

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api
  2015-07-07 13:36   ` Minchan Kim
@ 2015-07-07 14:32     ` Sergey Senozhatsky
  2015-07-07 14:48       ` Minchan Kim
  0 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 14:32 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel,
	Sergey Senozhatsky

On (07/07/15 22:36), Minchan Kim wrote:
[..]
> >  	struct zram *zram = dev_to_zram(dev);
> > +	struct zs_pool_stats pool_stats = {0};
> 
> Does it work even if the first member of the structure is non-scalar?
> Personally I prefer memset for initialization.
> I believe a modern compiler would optimize that quite well.

zs_pool_stats contains only one member now, so I didn't bother.
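
FWIW, a quick sketch of the non-scalar-first-member case (the names
below are made up purely for illustration):

	struct inner { int x, y; };

	struct outer {
		struct inner first;	/* non-scalar first member */
		int z;
	};

	struct outer o = {0};

`= {0}' is still valid C here: the `0' initializes the first scalar
via brace elision and everything else is zero-initialized, same as
memset would do for the members (gcc may warn with -Wmissing-braces,
though). So it should keep working even if zs_pool_stats grows a
non-scalar first member.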

[..]
> >  struct zs_pool {
> > -	char *name;
> > +	char			*name;
> 
> huge tab?
> 
> >  
> > -	struct size_class **size_class;
> > -	struct kmem_cache *handle_cachep;
> > +	struct size_class	**size_class;
> > +	struct kmem_cache	*handle_cachep;
> 
> tab?
> tab?
> 
> >  
> > -	gfp_t flags;	/* allocation flags used when growing pool */
> > -	atomic_long_t pages_allocated;
> 
> Why changes comment position?

Because otherwise it breaks the 80-cols rule.

> > +	/* Allocation flags used when growing pool */
> > +	gfp_t			flags;
> > +	atomic_long_t		pages_allocated;
> >  
> 
> Why blank line?

To make it more readable? Separating logically different
struct members. That's why the original code contains blank
lines between `char *name' and `struct size_class **size_class;
struct kmem_cache *handle_cachep;` and so on.

I see no issue.


> > +	struct zs_pool_stats	stats;
> >  #ifdef CONFIG_ZSMALLOC_STAT
> > -	struct dentry *stat_dentry;
> > +	struct dentry		*stat_dentry;
> 
> Tab.

Well, I see no issue with aligned struct members. Looks less
hairy and less messy than the original one.

clean:

struct zs_pool {
        char                    *name;

        struct size_class       **size_class;
        struct kmem_cache       *handle_cachep;

        /* Allocation flags used when growing pool */
        gfp_t                   flags;
        atomic_long_t           pages_allocated;

        struct zs_pool_stats    stats;

        /* Compact classes */
        struct shrinker         shrinker;
        bool                    shrinker_enabled;
#ifdef CONFIG_ZSMALLOC_STAT
        struct dentry           *stat_dentry;
#endif
};



dirty:

struct zs_pool {
        char *name;

        struct size_class **size_class;
        struct kmem_cache *handle_cachep;

        gfp_t flags;    /* allocation flags used when growing pool */
        atomic_long_t pages_allocated;

#ifdef CONFIG_ZSMALLOC_STAT
        struct dentry *stat_dentry;
#endif
};

	-ss

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 6/7] zsmalloc: account the number of compacted pages
  2015-07-07 14:21     ` Sergey Senozhatsky
@ 2015-07-07 14:33       ` Minchan Kim
  0 siblings, 0 replies; 23+ messages in thread
From: Minchan Kim @ 2015-07-07 14:33 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky

On Tue, Jul 07, 2015 at 11:21:09PM +0900, Sergey Senozhatsky wrote:
> On (07/07/15 22:39), Minchan Kim wrote:
> [..]
> > >  struct zs_pool_stats {
> > > -	/* How many objects were migrated */
> > > -	u64		num_migrated;
> > > +	/* How many pages were migrated (freed) */
> > > +	u64		pages_compacted;
> > 
> > Hmm, if we account it in page units, unsigned long is enough for 32-bit.
> 
> Well, this is a 'how many pages were freed overall' counter. We don't
> control the lifetime of the device, so I think it can outgrow 32 bits
> in some `extreme' cases.

Technically, you're right, but we have been using "unsigned long" for
several mm stats for a long time. For example, vm_event_item.
It includes many stats accumulated over the system's running time.
I don't think zsmalloc is special.
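
e.g. (include/linux/vmstat.h, from memory):

	struct vm_event_state {
		unsigned long event[NR_VM_EVENT_ITEMS];
	};

	DECLARE_PER_CPU(struct vm_event_state, vm_event_states);

All of those are accumulated for the entire uptime and nobody bothers
with u64 there.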

> 
> 	-ss

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction
  2015-07-07 13:44   ` Minchan Kim
@ 2015-07-07 14:41     ` Sergey Senozhatsky
  2015-07-07 15:01       ` Minchan Kim
  0 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 14:41 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel,
	Sergey Senozhatsky

On (07/07/15 22:44), Minchan Kim wrote:
[..]
> IMO, there is no value in maintaining it just in case
> register_shrinker() fails in practice.
> 
> Let's remove shrinker_enabled and abort pool creation if shrinker
> registration fails.

But why would we do this? Yes, it's kinda-sorta bad -- we were not
able to register the zspool shrinker, so there will be no automatic
compaction... And that's it.

It does not affect zsmalloc/zram functionality by any means, including
compaction itself -- the user still has a way to compact the zspool
(manually). And in some scenarios the user will never even see automatic
compaction in action (assuming that there is plenty of RAM available).

Can you explain your decision?

	-ss

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api
  2015-07-07 14:32     ` Sergey Senozhatsky
@ 2015-07-07 14:48       ` Minchan Kim
  2015-07-07 15:02         ` Sergey Senozhatsky
  0 siblings, 1 reply; 23+ messages in thread
From: Minchan Kim @ 2015-07-07 14:48 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky

On Tue, Jul 07, 2015 at 11:32:56PM +0900, Sergey Senozhatsky wrote:
> On (07/07/15 22:36), Minchan Kim wrote:
> [..]
> > >  	struct zram *zram = dev_to_zram(dev);
> > > +	struct zs_pool_stats pool_stats = {0};
> > 
> > Does it work even if the first member of the structure is non-scalar?
> > Personally I prefer memset for initialization.
> > I believe a modern compiler would optimize that quite well.
> 
> zs_pool_stats contains only one member now, so I didn't bother.
> 
> [..]
> > >  struct zs_pool {
> > > -	char *name;
> > > +	char			*name;
> > 
> > huge tab?
> > 
> > >  
> > > -	struct size_class **size_class;
> > > -	struct kmem_cache *handle_cachep;
> > > +	struct size_class	**size_class;
> > > +	struct kmem_cache	*handle_cachep;
> > 
> > tab?
> > tab?
> > 
> > >  
> > > -	gfp_t flags;	/* allocation flags used when growing pool */
> > > -	atomic_long_t pages_allocated;
> > 
> > Why changes comment position?
> 
> Because otherwise it breaks the 80-cols rule.
> 
> > > +	/* Allocation flags used when growing pool */
> > > +	gfp_t			flags;
> > > +	atomic_long_t		pages_allocated;
> > >  
> > 
> > Why blank line?
> 
> To make it more readable? Separating logically different
> struct members. That's why the original code contains blank
> lines between `char *name' and `struct size_class **size_class;
> struct kmem_cache *handle_cachep;` and so on.
> 
> I see no issue.
> 

Okay, I am not against the points you mentioned above.
But please don't squeeze cleanup patches into a core patchset next time.
It makes review harder and confuses git-blame.

> 
> > > +	struct zs_pool_stats	stats;
> > >  #ifdef CONFIG_ZSMALLOC_STAT
> > > -	struct dentry *stat_dentry;
> > > +	struct dentry		*stat_dentry;
> > 
> > Tab.
> 
> Well, I see no issue with aligned struct members. Looks less
> hairy and less messy than the original one.

But this is where I strongly disagree with you.
It depends on personal coding style.

I have been using white space.
As well, when I look at the current code under mm, which I'm used to,
almost everything uses just white space.

> 
> clean:
> 
> struct zs_pool {
>         char                    *name;
> 
>         struct size_class       **size_class;
>         struct kmem_cache       *handle_cachep;
> 
>         /* Allocation flags used when growing pool */
>         gfp_t                   flags;
>         atomic_long_t           pages_allocated;
> 
>         struct zs_pool_stats    stats;
> 
>         /* Compact classes */
>         struct shrinker         shrinker;
>         bool                    shrinker_enabled;
> #ifdef CONFIG_ZSMALLOC_STAT
>         struct dentry           *stat_dentry;
> #endif
> };
> 
> 
> 
> dirty:

Never dirty. It's more readable.

> 
> struct zs_pool {
>         char *name;
> 
>         struct size_class **size_class;
>         struct kmem_cache *handle_cachep;
> 
>         gfp_t flags;    /* allocation flags used when growing pool */
>         atomic_long_t pages_allocated;
> 
> #ifdef CONFIG_ZSMALLOC_STAT
>         struct dentry *stat_dentry;
> #endif
> };
> 
> 	-ss

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction
  2015-07-07 14:41     ` Sergey Senozhatsky
@ 2015-07-07 15:01       ` Minchan Kim
  2015-07-07 15:12         ` Sergey Senozhatsky
  0 siblings, 1 reply; 23+ messages in thread
From: Minchan Kim @ 2015-07-07 15:01 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky

On Tue, Jul 07, 2015 at 11:41:07PM +0900, Sergey Senozhatsky wrote:
> On (07/07/15 22:44), Minchan Kim wrote:
> [..]
> > IMO, there is no value in maintaining it just in case
> > register_shrinker() fails in practice.
> > 
> > Let's remove shrinker_enabled and abort pool creation if shrinker
> > registration fails.
> 
> But why would we do this? Yes, it's kinda-sorta bad -- we were not
> able to register the zspool shrinker, so there will be no automatic
> compaction... And that's it.
> 
> It does not affect zsmalloc/zram functionality by any means, including
> compaction itself -- the user still has a way to compact the zspool
> (manually). And in some scenarios the user will never even see automatic
> compaction in action (assuming that there is plenty of RAM available).
> 
> Can you explain your decision?

I don't think it would fail in *real practice*.
Although it might happen, how could zram help in those cases?

If it failed, it means there is already too little memory on the system,
so zram could not be helpful in that environment.
IOW, zram should have been enabled earlier.

If you want it strongly, please reproduce such a failure and prove that
zram was helpful for the system in that situation.

> 
> 	-ss

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api
  2015-07-07 14:48       ` Minchan Kim
@ 2015-07-07 15:02         ` Sergey Senozhatsky
  0 siblings, 0 replies; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 15:02 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel,
	Sergey Senozhatsky

On (07/07/15 23:48), Minchan Kim wrote:
[..]
> I have been using white space.
> As well, when I look at the current code under mm, which I'm used to,
> almost everything uses just white space.
> 

OK, don't want to engage into a meaningless discussion here.
I'll rollback those hunks.

	-ss

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction
  2015-07-07 15:01       ` Minchan Kim
@ 2015-07-07 15:12         ` Sergey Senozhatsky
  2015-07-08  2:18           ` Sergey Senozhatsky
  0 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-07 15:12 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel,
	Sergey Senozhatsky

On (07/08/15 00:01), Minchan Kim wrote:
[..]
> > But why would we do this? Yes, it's kinda-sorta bad -- we were not
> > able to register the zspool shrinker, so there will be no automatic
> > compaction... And that's it.
> > 
> > It does not affect zsmalloc/zram functionality by any means, including
> > compaction itself -- the user still has a way to compact the zspool
> > (manually). And in some scenarios the user will never even see automatic
> > compaction in action (assuming that there is plenty of RAM available).
> > 
> > Can you explain your decision?
> 
> I don't think it would fail in *real practice*.
> Although it might happen, how could zram help in those cases?
> 

This argument depends on the current register_shrinker() implementation;
should someone add an additional return branch there, it's done.
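
Today the entire thing is just (mm/vmscan.c, quoted from memory and
trimmed, so treat it as a sketch):

	int register_shrinker(struct shrinker *shrinker)
	{
		size_t size = sizeof(*shrinker->nr_deferred);

		if (shrinker->flags & SHRINKER_NUMA_AWARE)
			size *= nr_node_ids;

		/* the only failure branch at the moment */
		shrinker->nr_deferred = kzalloc(size, GFP_KERNEL);
		if (!shrinker->nr_deferred)
			return -ENOMEM;

		down_write(&shrinker_rwsem);
		list_add_tail(&shrinker->list, &shrinker_list);
		up_write(&shrinker_rwsem);
		return 0;
	}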

> If it failed, it means there is already too little memory on the system,
> so zram could not be helpful in that environment.
> IOW, zram should have been enabled earlier.
> 
> If you want it strongly, please reproduce such a failure and prove that
> zram was helpful for the system in that situation.

No, thanks. I'll just remove it.

	-ss

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction
  2015-07-07 15:12         ` Sergey Senozhatsky
@ 2015-07-08  2:18           ` Sergey Senozhatsky
  2015-07-08  3:04             ` Minchan Kim
  0 siblings, 1 reply; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-08  2:18 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky,
	Sergey Senozhatsky

On (07/08/15 00:12), Sergey Senozhatsky wrote:
> > I don't think it would fail in *real practice*.
> > Although it might happen, how could zram help in those cases?
> > 
> 
> This argument depends on the current register_shrinker() implementation;
> should someone add an additional return branch there, it's done.
> 
> > If it failed, it means there is already too little memory on the system,
> > so zram could not be helpful in that environment.
> > IOW, zram should have been enabled earlier.
> > 
> > If you want it strongly, please reproduce such a failure and prove that
> > zram was helpful for the system in that situation.
> 
> No, thanks. I'll just remove it.
> 

hm... This makes the error path a bit ugly. What we have now is
pretty straightforward:

... zs_create_pool(char *name, gfp_t flags)
{
	..
	if (zs_register_shrinker(pool) == 0)
		pool->shrinker_enabled = true;
	..
err:
	zs_destroy_pool(pool);
	return NULL;
}

zs_destroy_pool() does the destruction. It performs an unconditional
zs_unregister_shrinker(), which calls unregister_shrinker() _if needed_.

The shrinker API does not nicely handle unregister_shrinker() on a not-registered
->shrinker. And the error path can be triggered even before we do register_shrinker(),
so we can't 'fix' unregister_shrinker() in a common way, by doing something like

 void unregister_shrinker(struct shrinker *shrinker)
 {
+       if (unlikely(!shrinker->nr_deferred))
+               return;
+
        down_write(&shrinker_rwsem);
        list_del(&shrinker->list);
        up_write(&shrinker_rwsem);


(just for example), because someone can accidentally pass a dirty (not zeroed
out) `struct shrinker'. e.g.

struct foo {
	const char *b;
...
	struct shrinker s;
};

void bar(void)
{
	struct foo *f = kmalloc(...);

	if (!f)
		return;

	f->b = kmalloc(...);
	if (!f->b)
		goto err;

err:
	unregister_shrinker(&f->s);
			^^^^^^ boom
	...
}



So... options:

(a) we need something to signify that zs_register_shrinker() was successful

or

(b) factor out 'core' part of zs_destroy_pool() and do a full destruction when
called from the outside (from zram for example), or a partial destruction when
called from zs_create_pool() error path.



or

(c) introduce INIT_SHRINKER macro to init `struct shrinker' internal
members

(!!! composed in email client, not tested !!!)

include/linux/shrinker.h

#define INIT_SHRINKER(s)			\
	do {					\
		(s)->nr_deferred = NULL;	\
		INIT_LIST_HEAD(&(s)->list);	\
	} while (0)


and do

struct zs_pool *zs_create_pool(char *name, gfp_t flags)
{
	..
	INIT_SHRINKER(&pool->shrinker);

	pool->name = kstrdup(name, GFP_KERNEL);
	..
}



Looking at shrinker users, they all have to carry around some sort of
flag telling that "unregister_shrinker()" will not blow up... or
just do something fishy... like

 int ldlm_pools_init(void)
 {
         int rc;

         rc = ldlm_pools_thread_start();
         if (rc == 0) {
                 register_shrinker(&ldlm_pools_srv_shrinker);
                 register_shrinker(&ldlm_pools_cli_shrinker);
         }
         return rc;
 }
 EXPORT_SYMBOL(ldlm_pools_init);

 void ldlm_pools_fini(void)
 {
         unregister_shrinker(&ldlm_pools_srv_shrinker);
         unregister_shrinker(&ldlm_pools_cli_shrinker);
         ldlm_pools_thread_stop();
 }
 EXPORT_SYMBOL(ldlm_pools_fini);



or access private members of the `struct shrinker', like


struct cache_set {
...
	struct shrinker		shrink;
...
};

 void bch_btree_cache_free(struct cache_set *c)
 {
         struct btree *b;
         struct closure cl;
         closure_init_stack(&cl);

         if (c->shrink.list.next)
                 unregister_shrinker(&c->shrink);


Note the `shrink.list.next' check.

	-ss

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction
  2015-07-08  2:18           ` Sergey Senozhatsky
@ 2015-07-08  3:04             ` Minchan Kim
  2015-07-08  3:49               ` Sergey Senozhatsky
  0 siblings, 1 reply; 23+ messages in thread
From: Minchan Kim @ 2015-07-08  3:04 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Andrew Morton, linux-mm, linux-kernel, Sergey Senozhatsky

Hi Sergey,

On Wed, Jul 08, 2015 at 11:18:36AM +0900, Sergey Senozhatsky wrote:
> On (07/08/15 00:12), Sergey Senozhatsky wrote:
> > > I don't think it would fail in *real practice*.
> > > Although it might happen, how could zram help in those cases?
> > > 
> > 
> > This argument depends on the current register_shrinker() implementation;
> > should someone add an additional return branch there, it's done.
> > 
> > > If it failed, it means there is already too little memory on the system,
> > > so zram could not be helpful in that environment.
> > > IOW, zram should have been enabled earlier.
> > > 
> > > If you want it strongly, please reproduce such a failure and prove that
> > > zram was helpful for the system in that situation.
> > 
> > No, thanks. I'll just remove it.
> > 
> 
> hm... This makes the error path a bit ugly. What we have now is
> pretty straightforward:
> 
> ... zs_create_pool(char *name, gfp_t flags)
> {
> 	..
> 	if (zs_register_shrinker(pool) == 0)
> 		pool->shrinker_enabled = true;
> 	..
> err:
> 	zs_destroy_pool(pool);
> 	return NULL;
> }
> 
> zs_destroy_pool() does the destruction. It performs an unconditional
> zs_unregister_shrinker(), which calls unregister_shrinker() _if needed_.
> 
> The shrinker API does not nicely handle unregister_shrinker() on a not-registered
> ->shrinker. And the error path can be triggered even before we do register_shrinker(),
> so we can't 'fix' unregister_shrinker() in a common way, by doing something like
> 
>  void unregister_shrinker(struct shrinker *shrinker)
>  {
> +       if (unlikely(!shrinker->nr_deferred))
> +               return;
> +
>         down_write(&shrinker_rwsem);
>         list_del(&shrinker->list);
>         up_write(&shrinker_rwsem);
> 
> 
> (just for example), because someone can accidentally pass a dirty (not zeroed
> out) `struct shrinker'. e.g.
> 
> struct foo {
> 	const char *b;
> ...
> 	struct shrinker s;
> };
> 
> void bar(void)
> {
> 	struct foo *f = kmalloc(...);
> 
> 	if (!f)
> 		return;
> 
> 	f->b = kmalloc(...);
> 	if (!f->b)
> 		goto err;
> 
> err:
> 	unregister_shrinker(&f->s);
> 			^^^^^^ boom
> 	...
> }
> 
> 

Yes, it's ugly.

> 
> So... options:
> 
> (a) we need something to signify that zs_register_shrinker() was successful

I think a) is the simplest way to handle it now.
I don't want to get stuck on this issue.

Please add a comment explaining why we need such a boolean, so that
someone with an interest in shrinker clean-up can pick it up later.
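
Something along these lines, perhaps (just a sketch of the comment I
mean, against the v6 code):

	static void zs_unregister_shrinker(struct zs_pool *pool)
	{
		/*
		 * zs_destroy_pool() is also used by the zs_create_pool()
		 * error path, which can run before (or after a failed)
		 * register_shrinker(); calling unregister_shrinker() on
		 * a never-registered shrinker is not safe, hence the flag.
		 */
		if (pool->shrinker_enabled) {
			unregister_shrinker(&pool->shrinker);
			pool->shrinker_enabled = false;
		}
	}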

Thanks!

> 
> or
> 
> (b) factor out 'core' part of zs_destroy_pool() and do a full destruction when
> called from the outside (from zram for example), or a partial destruction when
> called from zs_create_pool() error path.
> 
> 
> 
> or
> 
> (c) introduce INIT_SHRINKER macro to init `struct shrinker' internal
> members
> 
> (!!! composed in email client, not tested !!!)
> 
> include/linux/shrinker.h
> 
> #define INIT_SHRINKER(s)			\
> 	do {					\
> 		(s)->nr_deferred = NULL;	\
> 		INIT_LIST_HEAD(&(s)->list);	\
> 	} while (0)
> 
> 
> and do
> 
> struct zs_pool *zs_create_pool(char *name, gfp_t flags)
> {
> 	..
> 	INIT_SHRINKER(&pool->shrinker);
> 
> 	pool->name = kstrdup(name, GFP_KERNEL);
> 	..
> }
> 
> 
> 
> Looking at shrinker users, they all have to carry around some sort of
> flag telling that "unregister_shrinker()" will not blow up... or
> just do something fishy... like
> 
>  int ldlm_pools_init(void)
>  {
>          int rc;
> 
>          rc = ldlm_pools_thread_start();
>          if (rc == 0) {
>                  register_shrinker(&ldlm_pools_srv_shrinker);
>                  register_shrinker(&ldlm_pools_cli_shrinker);
>          }
>          return rc;
>  }
>  EXPORT_SYMBOL(ldlm_pools_init);
> 
>  void ldlm_pools_fini(void)
>  {
>          unregister_shrinker(&ldlm_pools_srv_shrinker);
>          unregister_shrinker(&ldlm_pools_cli_shrinker);
>          ldlm_pools_thread_stop();
>  }
>  EXPORT_SYMBOL(ldlm_pools_fini);
> 
> 
> 
> or access private members of the `struct shrinker', like
> 
> 
> struct cache_set {
> ...
> 	struct shrinker		shrink;
> ...
> };
> 
>  void bch_btree_cache_free(struct cache_set *c)
>  {
>          struct btree *b;
>          struct closure cl;
>          closure_init_stack(&cl);
> 
>          if (c->shrink.list.next)
>                  unregister_shrinker(&c->shrink);
> 
> 
> Note the `shrink.list.next' check.
> 
> 	-ss

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction
  2015-07-08  3:04             ` Minchan Kim
@ 2015-07-08  3:49               ` Sergey Senozhatsky
  0 siblings, 0 replies; 23+ messages in thread
From: Sergey Senozhatsky @ 2015-07-08  3:49 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, Andrew Morton, linux-mm, linux-kernel,
	Sergey Senozhatsky

On (07/08/15 12:04), Minchan Kim wrote:
> Hi Sergey,
> 

Hi Minchan,

[..]
> > (a) we need something to signify that zs_register_shrinker() was successful
> 
> I think a) is the simplest way to handle it now.
> I don't want to get stuck on this issue.
> 
> Please add a comment explaining why we need such a boolean, so that
> someone with an interest in shrinker clean-up can pick it up later.


OK, sure.

Do you think that (c) deserves a separate discussion? (I can fork a new
discussion thread and Cc more people.) It does look like we can do a bit
better and make the shrinker API less fragile (and probably clean up/fix
several places in the kernel).

	-ss

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2015-07-08  3:49 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-07-07 11:56 [PATCH v6 0/7] mm/zsmalloc: introduce automatic pool compaction Sergey Senozhatsky
2015-07-07 11:56 ` [PATCH v6 1/7] zsmalloc: drop unused variable `nr_to_migrate' Sergey Senozhatsky
2015-07-07 11:56 ` [PATCH v6 2/7] zsmalloc: always keep per-class stats Sergey Senozhatsky
2015-07-07 11:56 ` [PATCH v6 3/7] zsmalloc: introduce zs_can_compact() function Sergey Senozhatsky
2015-07-07 13:21   ` Minchan Kim
2015-07-07 11:56 ` [PATCH v6 4/7] zsmalloc: cosmetic compaction code adjustments Sergey Senozhatsky
2015-07-07 11:56 ` [PATCH v6 5/7] zsmalloc/zram: introduce zs_pool_stats api Sergey Senozhatsky
2015-07-07 13:36   ` Minchan Kim
2015-07-07 14:32     ` Sergey Senozhatsky
2015-07-07 14:48       ` Minchan Kim
2015-07-07 15:02         ` Sergey Senozhatsky
2015-07-07 11:57 ` [PATCH v6 6/7] zsmalloc: account the number of compacted pages Sergey Senozhatsky
2015-07-07 13:39   ` Minchan Kim
2015-07-07 14:21     ` Sergey Senozhatsky
2015-07-07 14:33       ` Minchan Kim
2015-07-07 11:57 ` [PATCH v6 7/7] zsmalloc: use shrinker to trigger auto-compaction Sergey Senozhatsky
2015-07-07 13:44   ` Minchan Kim
2015-07-07 14:41     ` Sergey Senozhatsky
2015-07-07 15:01       ` Minchan Kim
2015-07-07 15:12         ` Sergey Senozhatsky
2015-07-08  2:18           ` Sergey Senozhatsky
2015-07-08  3:04             ` Minchan Kim
2015-07-08  3:49               ` Sergey Senozhatsky

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).