linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
@ 2019-10-10 20:04 Vitaly Wool
  2019-10-10 20:09 ` [PATCH 1/3] zpool: extend API to match zsmalloc Vitaly Wool
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Vitaly Wool @ 2019-10-10 20:04 UTC (permalink / raw)
  To: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim
  Cc: Sergey Senozhatsky, LKML, Vlastimil Babka, Shakeel Butt,
	Henry Burns, Theodore Ts'o

This patchset is a new take on an old issue: ZRAM can currently be used only with zsmalloc even though this may not be the optimal combination for some configurations. The previous (unsuccessful) attempt dates back to 2015 [1] and is notable for the heated discussions it caused.

The patchset in [1] had basically the single goal of enabling the ZRAM/zbud combo, which had a very narrow use case. Things have changed substantially since then, and now, with z3fold widely used as a zswap backend, I, as the z3fold maintainer, am getting requests to reiterate on making it possible to use ZRAM with any zpool-compatible backend, first of all z3fold.

Preliminary results for this work were presented at Linux Plumbers this year [2]. The talk at LPC, though it attracted limited interest, ended in a consensus to continue the work and pursue the goal of decoupling ZRAM from zsmalloc.

The current patchset has been stress tested on arm64 and x86_64 devices, including the Dell laptop I'm writing this message on now, not to mention several QEMU configurations.

[1] https://lkml.org/lkml/2015/9/14/356
[2] https://linuxplumbersconf.org/event/4/contributions/551/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 1/3] zpool: extend API to match zsmalloc
  2019-10-10 20:04 [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Vitaly Wool
@ 2019-10-10 20:09 ` Vitaly Wool
  2019-10-18 11:23   ` Dan Streetman
  2019-10-10 20:11 ` [PATCH 2/3] zsmalloc: add compaction and huge class callbacks Vitaly Wool
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Vitaly Wool @ 2019-10-10 20:09 UTC (permalink / raw)
  To: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim
  Cc: Sergey Senozhatsky, LKML, Vlastimil Babka, Shakeel Butt,
	Henry Burns, Theodore Ts'o

This patch adds the following functions to the zpool API:
- zpool_compact()
- zpool_get_num_compacted()
- zpool_huge_class_size()

The first one triggers compaction for the underlying allocator, the
second one retrieves the number of pages migrated due to compaction
over the whole lifetime of the pool, and the third one returns the
huge class size.

This API extension is done to align zpool API with zsmalloc API.
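
For illustration, a zpool user could exercise the new calls roughly
like this (sketch only, not part of the patch):

	/* Trigger compaction and read back the compaction statistics. */
	unsigned long migrated = zpool_compact(pool);
	unsigned long total_compacted = zpool_get_num_compacted(pool);

	/* Minimum object size the underlying allocator treats as "huge". */
	size_t huge_sz = zpool_huge_class_size(pool);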

Signed-off-by: Vitaly Wool <vitalywool@gmail.com>
---
 include/linux/zpool.h | 14 +++++++++++++-
 mm/zpool.c            | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/include/linux/zpool.h b/include/linux/zpool.h
index 51bf43076165..31f0c1360569 100644
--- a/include/linux/zpool.h
+++ b/include/linux/zpool.h
@@ -61,8 +61,13 @@ void *zpool_map_handle(struct zpool *pool, unsigned long handle,
 
 void zpool_unmap_handle(struct zpool *pool, unsigned long handle);
 
+unsigned long zpool_compact(struct zpool *pool);
+
+unsigned long zpool_get_num_compacted(struct zpool *pool);
+
 u64 zpool_get_total_size(struct zpool *pool);
 
+size_t zpool_huge_class_size(struct zpool *zpool);
 
 /**
  * struct zpool_driver - driver implementation for zpool
@@ -75,7 +80,10 @@ u64 zpool_get_total_size(struct zpool *pool);
  * @shrink:	shrink the pool.
  * @map:	map a handle.
  * @unmap:	unmap a handle.
- * @total_size:	get total size of a pool.
+ * @compact:	try to run compaction over a pool
+ * @get_num_compacted:	get amount of compacted pages for a pool
+ * @total_size:	get total size of a pool
+ * @huge_class_size: huge class threshold for pool pages.
  *
  * This is created by a zpool implementation and registered
  * with zpool.
@@ -104,7 +112,11 @@ struct zpool_driver {
 				enum zpool_mapmode mm);
 	void (*unmap)(void *pool, unsigned long handle);
 
+	unsigned long (*compact)(void *pool);
+	unsigned long (*get_num_compacted)(void *pool);
+
 	u64 (*total_size)(void *pool);
+	size_t (*huge_class_size)(void *pool);
 };
 
 void zpool_register_driver(struct zpool_driver *driver);
diff --git a/mm/zpool.c b/mm/zpool.c
index 863669212070..55e69213c2eb 100644
--- a/mm/zpool.c
+++ b/mm/zpool.c
@@ -362,6 +362,30 @@ void zpool_unmap_handle(struct zpool *zpool, unsigned long handle)
 	zpool->driver->unmap(zpool->pool, handle);
 }
 
+/**
+ * zpool_compact() - try to run compaction over zpool
+ * @zpool:	The zpool to compact
+ *
+ * Returns: the number of migrated pages
+ */
+unsigned long zpool_compact(struct zpool *zpool)
+{
+	return zpool->driver->compact ? zpool->driver->compact(zpool->pool) : 0;
+}
+
+
+/**
+ * zpool_get_num_compacted() - get the number of migrated/compacted pages
+ * @zpool:	The zpool to get compaction statistics for
+ *
+ * Returns: the total number of migrated pages for the pool
+ */
+unsigned long zpool_get_num_compacted(struct zpool *zpool)
+{
+	return zpool->driver->get_num_compacted ?
+		zpool->driver->get_num_compacted(zpool->pool) : 0;
+}
+
 /**
  * zpool_get_total_size() - The total size of the pool
  * @zpool:	The zpool to check
@@ -375,6 +399,18 @@ u64 zpool_get_total_size(struct zpool *zpool)
 	return zpool->driver->total_size(zpool->pool);
 }
 
+/**
+ * zpool_huge_class_size() - get size for the "huge" class
+ * @zpool:	The zpool to check
+ *
+ * Returns: size of the huge class
+ */
+size_t zpool_huge_class_size(struct zpool *zpool)
+{
+	return zpool->driver->huge_class_size ?
+		zpool->driver->huge_class_size(zpool->pool) : 0;
+}
+
 /**
  * zpool_evictable() - Test if zpool is potentially evictable
  * @zpool:	The zpool to test
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 2/3] zsmalloc: add compaction and huge class callbacks
  2019-10-10 20:04 [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Vitaly Wool
  2019-10-10 20:09 ` [PATCH 1/3] zpool: extend API to match zsmalloc Vitaly Wool
@ 2019-10-10 20:11 ` Vitaly Wool
  2019-10-14 10:38   ` Sergey Senozhatsky
  2019-10-10 20:20 ` [PATCH 3/3] zram: use common zpool interface Vitaly Wool
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 17+ messages in thread
From: Vitaly Wool @ 2019-10-10 20:11 UTC (permalink / raw)
  To: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim
  Cc: Sergey Senozhatsky, LKML, Vlastimil Babka, Shakeel Butt,
	Henry Burns, Theodore Ts'o

Add compaction callbacks for the zpool compaction API extension, and
add the huge_class_size callback too so that the two APIs are fully
aligned.

With these in place, we can proceed with modifying ZRAM to use the
universal (zpool) API.

Signed-off-by: Vitaly Wool <vitalywool@gmail.com>
---
 mm/zsmalloc.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 2b2b9aae8a3c..43f43272b998 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -437,11 +437,29 @@ static void zs_zpool_unmap(void *pool, unsigned long handle)
 	zs_unmap_object(pool, handle);
 }
 
+static unsigned long zs_zpool_compact(void *pool)
+{
+	return zs_compact(pool);
+}
+
+static unsigned long zs_zpool_get_compacted(void *pool)
+{
+	struct zs_pool_stats stats;
+
+	zs_pool_stats(pool, &stats);
+	return stats.pages_compacted;
+}
+
 static u64 zs_zpool_total_size(void *pool)
 {
 	return zs_get_total_pages(pool) << PAGE_SHIFT;
 }
 
+static size_t zs_zpool_huge_class_size(void *pool)
+{
+	return zs_huge_class_size(pool);
+}
+
 static struct zpool_driver zs_zpool_driver = {
 	.type =			  "zsmalloc",
 	.owner =		  THIS_MODULE,
@@ -453,6 +471,9 @@ static struct zpool_driver zs_zpool_driver = {
 	.map =			  zs_zpool_map,
 	.unmap =		  zs_zpool_unmap,
 	.total_size =		  zs_zpool_total_size,
+	.compact =		  zs_zpool_compact,
+	.get_num_compacted =	  zs_zpool_get_compacted,
+	.huge_class_size =	  zs_zpool_huge_class_size,
 };
 
 MODULE_ALIAS("zpool-zsmalloc");
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 3/3] zram: use common zpool interface
  2019-10-10 20:04 [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Vitaly Wool
  2019-10-10 20:09 ` [PATCH 1/3] zpool: extend API to match zsmalloc Vitaly Wool
  2019-10-10 20:11 ` [PATCH 2/3] zsmalloc: add compaction and huge class callbacks Vitaly Wool
@ 2019-10-10 20:20 ` Vitaly Wool
  2019-10-14 10:47   ` Sergey Senozhatsky
  2019-10-14 10:33 ` [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Sergey Senozhatsky
  2019-10-14 16:41 ` Minchan Kim
  4 siblings, 1 reply; 17+ messages in thread
From: Vitaly Wool @ 2019-10-10 20:20 UTC (permalink / raw)
  To: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim
  Cc: Sergey Senozhatsky, LKML, Vlastimil Babka, Shakeel Butt,
	Henry Burns, Theodore Ts'o

Change ZRAM to use the zpool API. This patch allows any zpool-compatible
allocation backend to be used with ZRAM. It is meant to introduce no
functional changes to ZRAM itself.

A zpool-registered backend can be selected via the module parameter or
the kernel boot string; 'zsmalloc' is used by default.
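
For example (the 'backend' parameter is introduced by this patch and
must name a registered zpool backend):

	modprobe zram backend=z3fold

or, with zram built in, on the kernel command line:

	zram.backend=z3fold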

Signed-off-by: Vitaly Wool <vitalywool@gmail.com>
---
 drivers/block/zram/Kconfig    |  3 ++-
 drivers/block/zram/zram_drv.c | 64 +++++++++++++++++++----------------
 drivers/block/zram/zram_drv.h |  4 +--
 3 files changed, 39 insertions(+), 32 deletions(-)

diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
index fe7a4b7d30cf..7248d5aa3468 100644
--- a/drivers/block/zram/Kconfig
+++ b/drivers/block/zram/Kconfig
@@ -1,8 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
 config ZRAM
 	tristate "Compressed RAM block device support"
-	depends on BLOCK && SYSFS && ZSMALLOC && CRYPTO
+	depends on BLOCK && SYSFS && CRYPTO
 	select CRYPTO_LZO
+	select ZPOOL
 	help
 	  Creates virtual block devices called /dev/zramX (X = 0, 1, ...).
 	  Pages written to these disks are compressed and stored in memory
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index d58a359a6622..881f10f99a5d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -43,6 +43,9 @@ static DEFINE_MUTEX(zram_index_mutex);
 static int zram_major;
 static const char *default_compressor = "lzo-rle";
 
+#define BACKEND_PAR_BUF_SIZE	32
+static char backend_par_buf[BACKEND_PAR_BUF_SIZE];
+
 /* Module params (documentation at end) */
 static unsigned int num_devices = 1;
 /*
@@ -277,7 +280,7 @@ static ssize_t mem_used_max_store(struct device *dev,
 	down_read(&zram->init_lock);
 	if (init_done(zram)) {
 		atomic_long_set(&zram->stats.max_used_pages,
-				zs_get_total_pages(zram->mem_pool));
+			zpool_get_total_size(zram->mem_pool) >> PAGE_SHIFT);
 	}
 	up_read(&zram->init_lock);
 
@@ -1020,7 +1023,7 @@ static ssize_t compact_store(struct device *dev,
 		return -EINVAL;
 	}
 
-	zs_compact(zram->mem_pool);
+	zpool_compact(zram->mem_pool);
 	up_read(&zram->init_lock);
 
 	return len;
@@ -1048,17 +1051,14 @@ static ssize_t mm_stat_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
 	struct zram *zram = dev_to_zram(dev);
-	struct zs_pool_stats pool_stats;
 	u64 orig_size, mem_used = 0;
-	long max_used;
+	long max_used, num_compacted = 0;
 	ssize_t ret;
 
-	memset(&pool_stats, 0x00, sizeof(struct zs_pool_stats));
-
 	down_read(&zram->init_lock);
 	if (init_done(zram)) {
-		mem_used = zs_get_total_pages(zram->mem_pool);
-		zs_pool_stats(zram->mem_pool, &pool_stats);
+		mem_used = zpool_get_total_size(zram->mem_pool);
+		num_compacted = zpool_get_num_compacted(zram->mem_pool);
 	}
 
 	orig_size = atomic64_read(&zram->stats.pages_stored);
@@ -1068,11 +1068,11 @@ static ssize_t mm_stat_show(struct device *dev,
 			"%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu\n",
 			orig_size << PAGE_SHIFT,
 			(u64)atomic64_read(&zram->stats.compr_data_size),
-			mem_used << PAGE_SHIFT,
+			mem_used,
 			zram->limit_pages << PAGE_SHIFT,
 			max_used << PAGE_SHIFT,
 			(u64)atomic64_read(&zram->stats.same_pages),
-			pool_stats.pages_compacted,
+			num_compacted,
 			(u64)atomic64_read(&zram->stats.huge_pages));
 	up_read(&zram->init_lock);
 
@@ -1133,27 +1133,30 @@ static void zram_meta_free(struct zram *zram, u64 disksize)
 	for (index = 0; index < num_pages; index++)
 		zram_free_page(zram, index);
 
-	zs_destroy_pool(zram->mem_pool);
+	zpool_destroy_pool(zram->mem_pool);
 	vfree(zram->table);
 }
 
 static bool zram_meta_alloc(struct zram *zram, u64 disksize)
 {
 	size_t num_pages;
+	char *backend;
 
 	num_pages = disksize >> PAGE_SHIFT;
 	zram->table = vzalloc(array_size(num_pages, sizeof(*zram->table)));
 	if (!zram->table)
 		return false;
 
-	zram->mem_pool = zs_create_pool(zram->disk->disk_name);
+	backend = strlen(backend_par_buf) ? backend_par_buf : "zsmalloc";
+	zram->mem_pool = zpool_create_pool(backend, zram->disk->disk_name,
+					GFP_NOIO, NULL);
 	if (!zram->mem_pool) {
 		vfree(zram->table);
 		return false;
 	}
 
 	if (!huge_class_size)
-		huge_class_size = zs_huge_class_size(zram->mem_pool);
+		huge_class_size = zpool_huge_class_size(zram->mem_pool);
 	return true;
 }
 
@@ -1197,7 +1200,7 @@ static void zram_free_page(struct zram *zram, size_t index)
 	if (!handle)
 		return;
 
-	zs_free(zram->mem_pool, handle);
+	zpool_free(zram->mem_pool, handle);
 
 	atomic64_sub(zram_get_obj_size(zram, index),
 			&zram->stats.compr_data_size);
@@ -1246,7 +1249,7 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
 
 	size = zram_get_obj_size(zram, index);
 
-	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
+	src = zpool_map_handle(zram->mem_pool, handle, ZPOOL_MM_RO);
 	if (size == PAGE_SIZE) {
 		dst = kmap_atomic(page);
 		memcpy(dst, src, PAGE_SIZE);
@@ -1260,7 +1263,7 @@ static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
 		kunmap_atomic(dst);
 		zcomp_stream_put(zram->comp);
 	}
-	zs_unmap_object(zram->mem_pool, handle);
+	zpool_unmap_handle(zram->mem_pool, handle);
 	zram_slot_unlock(zram, index);
 
 	/* Should NEVER happen. Return bio error if it does. */
@@ -1335,7 +1338,7 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
 	if (unlikely(ret)) {
 		zcomp_stream_put(zram->comp);
 		pr_err("Compression failed! err=%d\n", ret);
-		zs_free(zram->mem_pool, handle);
+		zpool_free(zram->mem_pool, handle);
 		return ret;
 	}
 
@@ -1354,33 +1357,34 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
 	 * if we have a 'non-null' handle here then we are coming
 	 * from the slow path and handle has already been allocated.
 	 */
-	if (!handle)
-		handle = zs_malloc(zram->mem_pool, comp_len,
+	if (handle == 0)
+		ret = zpool_malloc(zram->mem_pool, comp_len,
 				__GFP_KSWAPD_RECLAIM |
 				__GFP_NOWARN |
 				__GFP_HIGHMEM |
-				__GFP_MOVABLE);
-	if (!handle) {
+				__GFP_MOVABLE,
+				&handle);
+	if (ret) {
 		zcomp_stream_put(zram->comp);
 		atomic64_inc(&zram->stats.writestall);
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				GFP_NOIO | __GFP_HIGHMEM |
-				__GFP_MOVABLE);
-		if (handle)
+		ret = zpool_malloc(zram->mem_pool, comp_len,
+				GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE,
+				&handle);
+		if (ret == 0)
 			goto compress_again;
 		return -ENOMEM;
 	}
 
-	alloced_pages = zs_get_total_pages(zram->mem_pool);
+	alloced_pages = zpool_get_total_size(zram->mem_pool) >> PAGE_SHIFT;
 	update_used_max(zram, alloced_pages);
 
 	if (zram->limit_pages && alloced_pages > zram->limit_pages) {
 		zcomp_stream_put(zram->comp);
-		zs_free(zram->mem_pool, handle);
+		zpool_free(zram->mem_pool, handle);
 		return -ENOMEM;
 	}
 
-	dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
+	dst = zpool_map_handle(zram->mem_pool, handle, ZPOOL_MM_WO);
 
 	src = zstrm->buffer;
 	if (comp_len == PAGE_SIZE)
@@ -1390,7 +1394,7 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
 		kunmap_atomic(src);
 
 	zcomp_stream_put(zram->comp);
-	zs_unmap_object(zram->mem_pool, handle);
+	zpool_unmap_handle(zram->mem_pool, handle);
 	atomic64_add(comp_len, &zram->stats.compr_data_size);
 out:
 	/*
@@ -2136,6 +2140,8 @@ module_exit(zram_exit);
 
 module_param(num_devices, uint, 0);
 MODULE_PARM_DESC(num_devices, "Number of pre-created zram devices");
+module_param_string(backend, backend_par_buf, BACKEND_PAR_BUF_SIZE, S_IRUGO);
+MODULE_PARM_DESC(backend, "Compression storage (backend) name");
 
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("Nitin Gupta <ngupta@vflare.org>");
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index f2fd46daa760..f4f51c6489ba 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -16,7 +16,7 @@
 #define _ZRAM_DRV_H_
 
 #include <linux/rwsem.h>
-#include <linux/zsmalloc.h>
+#include <linux/zpool.h>
 #include <linux/crypto.h>
 
 #include "zcomp.h"
@@ -91,7 +91,7 @@ struct zram_stats {
 
 struct zram {
 	struct zram_table_entry *table;
-	struct zs_pool *mem_pool;
+	struct zpool *mem_pool;
 	struct zcomp *comp;
 	struct gendisk *disk;
 	/* Prevent concurrent execution of device init */
-- 
2.20.1

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
  2019-10-10 20:04 [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Vitaly Wool
                   ` (2 preceding siblings ...)
  2019-10-10 20:20 ` [PATCH 3/3] zram: use common zpool interface Vitaly Wool
@ 2019-10-14 10:33 ` Sergey Senozhatsky
  2019-10-14 11:49   ` Vitaly Wool
  2019-10-14 16:41 ` Minchan Kim
  4 siblings, 1 reply; 17+ messages in thread
From: Sergey Senozhatsky @ 2019-10-14 10:33 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim,
	Sergey Senozhatsky, LKML, Vlastimil Babka, Shakeel Butt,
	Henry Burns, Theodore Ts'o

Hi,

On (10/10/19 23:04), Vitaly Wool wrote:
[..]
> The coming patchset is a new take on the old issue: ZRAM can
> currently be used only with zsmalloc even though this may not
> be the optimal combination for some configurations. The previous
> (unsuccessful) attempt dates back to 2015 [1] and is notable for
> the heated discussions it has caused.

Oh, right, I do recall it.

> The patchset in [1] had basically the only goal of enabling
> ZRAM/zbud combo which had a very narrow use case. Things have
> changed substantially since then, and now, with z3fold used
> widely as a zswap backend, I, as the z3fold maintainer, am
> getting requests to re-interate on making it possible to use
> ZRAM with any zpool-compatible backend, first of all z3fold.

A quick question: what are the technical reasons to prefer
allocator X over zsmalloc? Some data would help, I guess.

> The preliminary results for this work have been delivered at
> Linux Plumbers this year [2]. The talk at LPC, though having
> attracted limited interest, ended in a consensus to continue
> the work and pursue the goal of decoupling ZRAM from zsmalloc.

[..]

> [1] https://lkml.org/lkml/2015/9/14/356

I need to re-read it, thanks for the link. IIRC, but maybe
I'm wrong, one of the things Minchan was not happy with was
the increased maintenance cost. So, perhaps, this also should
be discussed/addressed (and maybe even in the first place).

	-ss

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/3] zsmalloc: add compaction and huge class callbacks
  2019-10-10 20:11 ` [PATCH 2/3] zsmalloc: add compaction and huge class callbacks Vitaly Wool
@ 2019-10-14 10:38   ` Sergey Senozhatsky
  0 siblings, 0 replies; 17+ messages in thread
From: Sergey Senozhatsky @ 2019-10-14 10:38 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim,
	Sergey Senozhatsky, LKML, Vlastimil Babka, Shakeel Butt,
	Henry Burns, Theodore Ts'o

On (10/10/19 23:11), Vitaly Wool wrote:
[..]
> +static unsigned long zs_zpool_get_compacted(void *pool)
> +{
> +	struct zs_pool_stats stats;
> +
> +	zs_pool_stats(pool, &stats);
> +	return stats.pages_compacted;
> +}

So zs_pool_stats() can become static?

	-ss

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] zram: use common zpool interface
  2019-10-10 20:20 ` [PATCH 3/3] zram: use common zpool interface Vitaly Wool
@ 2019-10-14 10:47   ` Sergey Senozhatsky
  2019-10-14 11:52     ` Vitaly Wool
  0 siblings, 1 reply; 17+ messages in thread
From: Sergey Senozhatsky @ 2019-10-14 10:47 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim,
	Sergey Senozhatsky, LKML, Vlastimil Babka, Shakeel Butt,
	Henry Burns, Theodore Ts'o

On (10/10/19 23:20), Vitaly Wool wrote:
[..]
>  static const char *default_compressor = "lzo-rle";
>  
> +#define BACKEND_PAR_BUF_SIZE	32
> +static char backend_par_buf[BACKEND_PAR_BUF_SIZE];

We can have multiple zram devices (zram0 .. zramN), I guess it
would make sense not to force all devices to use one particular
allocator (e.g. see comp_algorithm_store()).

If the motivation for the patch set is that zsmalloc does not
perform equally well for various data access patterns, then the
same is true for any other allocator. Thus, I think, we need to
have a per-device 'allocator' knob.

	-ss

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
  2019-10-14 10:33 ` [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Sergey Senozhatsky
@ 2019-10-14 11:49   ` Vitaly Wool
  0 siblings, 0 replies; 17+ messages in thread
From: Vitaly Wool @ 2019-10-14 11:49 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

Hi Sergey,

On Mon, Oct 14, 2019 at 12:35 PM Sergey Senozhatsky
<sergey.senozhatsky.work@gmail.com> wrote:
>
> Hi,
>
> On (10/10/19 23:04), Vitaly Wool wrote:
> [..]
> > The coming patchset is a new take on the old issue: ZRAM can
> > currently be used only with zsmalloc even though this may not
> > be the optimal combination for some configurations. The previous
> > (unsuccessful) attempt dates back to 2015 [1] and is notable for
> > the heated discussions it has caused.
>
> Oh, right, I do recall it.
>
> > The patchset in [1] had basically the only goal of enabling
> > ZRAM/zbud combo which had a very narrow use case. Things have
> > changed substantially since then, and now, with z3fold used
> > widely as a zswap backend, I, as the z3fold maintainer, am
> > getting requests to re-interate on making it possible to use
> > ZRAM with any zpool-compatible backend, first of all z3fold.
>
> A quick question, what are the technical reasons to prefer
> allocator X over zsmalloc? Some data would help, I guess.

For z3fold, the data can be found here:
https://elinux.org/images/d/d3/Z3fold.pdf.

For zbud (which is also of interest), imagine a low-end platform with
a simplistic HW compressor that doesn't give a really high ratio. We
still want to be able to use ZRAM (not necessarily as a swap
partition, but rather for /home and /var) but we absolutely don't need
zsmalloc's complexity. zbud is a perfect match here (provided that it
can cope with PAGE_SIZE pages, yes, but it's a small patch to make
that work) since it's unlikely that we'll squeeze more than 2 compressed
pages per page with that HW compressor anyway.

> > The preliminary results for this work have been delivered at
> > Linux Plumbers this year [2]. The talk at LPC, though having
> > attracted limited interest, ended in a consensus to continue
> > the work and pursue the goal of decoupling ZRAM from zsmalloc.
>
> [..]
>
> > [1] https://lkml.org/lkml/2015/9/14/356
>
> I need to re-read it, thanks for the link. IIRC, but maybe
> I'm wrong, one of the things Minchan was not happy with was
> increased maintenance cost. So, perhaps, this also should
> be discuss/addressed (and maybe even in the first place).

I have a hard time seeing how the maintenance cost is increased here :)

~Vitaly

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] zram: use common zpool interface
  2019-10-14 10:47   ` Sergey Senozhatsky
@ 2019-10-14 11:52     ` Vitaly Wool
  2019-10-15  2:04       ` Sergey Senozhatsky
  0 siblings, 1 reply; 17+ messages in thread
From: Vitaly Wool @ 2019-10-14 11:52 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Minchan Kim, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

On Mon, Oct 14, 2019 at 12:49 PM Sergey Senozhatsky
<sergey.senozhatsky.work@gmail.com> wrote:
>
> On (10/10/19 23:20), Vitaly Wool wrote:
> [..]
> >  static const char *default_compressor = "lzo-rle";
> >
> > +#define BACKEND_PAR_BUF_SIZE 32
> > +static char backend_par_buf[BACKEND_PAR_BUF_SIZE];
>
> We can have multiple zram devices (zram0 .. zramN), I guess it
> would make sense not to force all devices to use one particular
> allocator (e.g. see comp_algorithm_store()).
>
> If the motivation for the patch set is that zsmalloc does not
> perform equally well for various data access patterns, then the
> same is true for any other allocator. Thus, I think, we need to
> have a per-device 'allocator' knob.

We were thinking here in per-SoC terms basically, but this is a valid
point. Since zram has a well-established sysfs per-device
configuration interface, backend choice better be moved there. Agree?
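
Roughly something like this (rough, untested sketch; the backend_name
field and the naming are illustrative, mirroring comp_algorithm_store()):

static ssize_t backend_store(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t len)
{
	struct zram *zram = dev_to_zram(dev);

	down_write(&zram->init_lock);
	if (init_done(zram)) {
		up_write(&zram->init_lock);
		pr_info("Can't change backend for initialized device\n");
		return -EBUSY;
	}

	/* Remember the zpool driver name; trim the trailing newline. */
	strlcpy(zram->backend_name, buf, sizeof(zram->backend_name));
	strim(zram->backend_name);
	up_write(&zram->init_lock);

	return len;
}

zram_meta_alloc() would then pass zram->backend_name (falling back to
"zsmalloc" when empty) to zpool_create_pool().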

~Vitaly

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
  2019-10-10 20:04 [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Vitaly Wool
                   ` (3 preceding siblings ...)
  2019-10-14 10:33 ` [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Sergey Senozhatsky
@ 2019-10-14 16:41 ` Minchan Kim
  2019-10-15  7:39   ` Vitaly Wool
  4 siblings, 1 reply; 17+ messages in thread
From: Minchan Kim @ 2019-10-14 16:41 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Sergey Senozhatsky, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly Wool wrote:
> The coming patchset is a new take on the old issue: ZRAM can currently be used only with zsmalloc even though this may not be the optimal combination for some configurations. The previous (unsuccessful) attempt dates back to 2015 [1] and is notable for the heated discussions it has caused.
> 
> The patchset in [1] had basically the only goal of enabling ZRAM/zbud combo which had a very narrow use case. Things have changed substantially since then, and now, with z3fold used widely as a zswap backend, I, as the z3fold maintainer, am getting requests to re-interate on making it possible to use ZRAM with any zpool-compatible backend, first of all z3fold.
> 
> The preliminary results for this work have been delivered at Linux Plumbers this year [2]. The talk at LPC, though having attracted limited interest, ended in a consensus to continue the work and pursue the goal of decoupling ZRAM from zsmalloc.
> 
> The current patchset has been stress tested on arm64 and x86_64 devices, including the Dell laptop I'm writing this message on now, not to mention several QEmu confugirations.
> 
> [1] https://lkml.org/lkml/2015/9/14/356
> [2] https://linuxplumbersconf.org/event/4/contributions/551/

Please describe what the use case is in the real world, what benefit
zsmalloc cannot provide by design, and how significant it is.
I really don't want to create fragmentation among allocators, so we should
really see what zsmalloc cannot achieve if that is what you are claiming.
Please tell us how to test it so that we can investigate the root
cause.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/3] zram: use common zpool interface
  2019-10-14 11:52     ` Vitaly Wool
@ 2019-10-15  2:04       ` Sergey Senozhatsky
  0 siblings, 0 replies; 17+ messages in thread
From: Sergey Senozhatsky @ 2019-10-15  2:04 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Sergey Senozhatsky, Linux-MM, Andrew Morton, Dan Streetman,
	Minchan Kim, LKML, Vlastimil Babka, Shakeel Butt, Henry Burns,
	Theodore Ts'o

On (10/14/19 13:52), Vitaly Wool wrote:
> On Mon, Oct 14, 2019 at 12:49 PM Sergey Senozhatsky
> <sergey.senozhatsky.work@gmail.com> wrote:
> >
> > On (10/10/19 23:20), Vitaly Wool wrote:
> > [..]
> > >  static const char *default_compressor = "lzo-rle";
> > >
> > > +#define BACKEND_PAR_BUF_SIZE 32
> > > +static char backend_par_buf[BACKEND_PAR_BUF_SIZE];
> >
> > We can have multiple zram devices (zram0 .. zramN), I guess it
> > would make sense not to force all devices to use one particular
> > allocator (e.g. see comp_algorithm_store()).
> >
> > If the motivation for the patch set is that zsmalloc does not
> > perform equally well for various data access patterns, then the
> > same is true for any other allocator. Thus, I think, we need to
> > have a per-device 'allocator' knob.
> 
> We were thinking here in per-SoC terms basically, but this is a valid
> point. Since zram has a well-established sysfs per-device
> configuration interface, backend choice better be moved there. Agree?

Yup, sysfs per-device knob.

// Given that Minchan is OK with the patch set.

	-ss

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
  2019-10-14 16:41 ` Minchan Kim
@ 2019-10-15  7:39   ` Vitaly Wool
  2019-10-15 20:00     ` Minchan Kim
  0 siblings, 1 reply; 17+ messages in thread
From: Vitaly Wool @ 2019-10-15  7:39 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Sergey Senozhatsky, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

Hi Minchan,

On Mon, Oct 14, 2019 at 6:41 PM Minchan Kim <minchan@kernel.org> wrote:
>
> On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly Wool wrote:
> > The coming patchset is a new take on the old issue: ZRAM can currently be used only with zsmalloc even though this may not be the optimal combination for some configurations. The previous (unsuccessful) attempt dates back to 2015 [1] and is notable for the heated discussions it has caused.
> >
> > The patchset in [1] had basically the only goal of enabling ZRAM/zbud combo which had a very narrow use case. Things have changed substantially since then, and now, with z3fold used widely as a zswap backend, I, as the z3fold maintainer, am getting requests to re-interate on making it possible to use ZRAM with any zpool-compatible backend, first of all z3fold.
> >
> > The preliminary results for this work have been delivered at Linux Plumbers this year [2]. The talk at LPC, though having attracted limited interest, ended in a consensus to continue the work and pursue the goal of decoupling ZRAM from zsmalloc.
> >
> > The current patchset has been stress tested on arm64 and x86_64 devices, including the Dell laptop I'm writing this message on now, not to mention several QEmu confugirations.
> >
> > [1] https://lkml.org/lkml/2015/9/14/356
> > [2] https://linuxplumbersconf.org/event/4/contributions/551/
>
> Please describe what's the usecase in real world, what's the benefit zsmalloc
> cannot fulfill by desgin and how it's significant.

I'm not entirely sure how to interpret the phrase "the benefit
zsmalloc cannot fulfill by design" but let me explain.
First, there are multi-core systems where z3fold can provide
better throughput.
Then, there are low-end systems with hardware
compression/decompression support which don't need zsmalloc's
sophistication and would rather use zbud with ZRAM because the
compression ratio is relatively low.
Finally, there are MMU-less systems targeting IoT that still run
Linux, and having a compressed RAM disk is something that would help
these systems operate in a better way (for the benefit of the overall
Linux ecosystem, if you care about that, of course; well, some people
do).

> I really don't want to make fragmentaion of allocator so we should really see
> how zsmalloc cannot achieve things if you are claiming.

I have to say that this point is completely bogus. We do not create
fragmentation by using a better defined and standardized API. In fact,
we aim to increase the number of use cases and test coverage for ZRAM.
With that said, I have a hard time seeing how zsmalloc can operate on
an MMU-less system.

> Please tell us how to test it so that we could investigate what's the root
> cause.

I gather you have read neither the LPC documents nor my
conversation with Sergey re: these changes, because if you had you
wouldn't be asking the type of questions you're asking. Please also see
above.

I feel a bit awkward explaining basic things to you but there may not
be any other "root cause" than an applicability issue. zsmalloc is a great
allocator but it's not universal and has its limitations. The
(potential) scope for ZRAM is wider than what zsmalloc can cover. We are
*helping* _you_ to extend this scope "in the real world" (c) and you come
up with bogus objections. Why?

Best regards,
   Vitaly

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
  2019-10-15  7:39   ` Vitaly Wool
@ 2019-10-15 20:00     ` Minchan Kim
  2019-10-21 14:21       ` Vitaly Wool
  0 siblings, 1 reply; 17+ messages in thread
From: Minchan Kim @ 2019-10-15 20:00 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Sergey Senozhatsky, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

On Tue, Oct 15, 2019 at 09:39:35AM +0200, Vitaly Wool wrote:
> Hi Minchan,
> 
> On Mon, Oct 14, 2019 at 6:41 PM Minchan Kim <minchan@kernel.org> wrote:
> >
> > On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly Wool wrote:
> > > The coming patchset is a new take on the old issue: ZRAM can currently be used only with zsmalloc even though this may not be the optimal combination for some configurations. The previous (unsuccessful) attempt dates back to 2015 [1] and is notable for the heated discussions it has caused.
> > >
> > > The patchset in [1] had basically the only goal of enabling ZRAM/zbud combo which had a very narrow use case. Things have changed substantially since then, and now, with z3fold used widely as a zswap backend, I, as the z3fold maintainer, am getting requests to re-interate on making it possible to use ZRAM with any zpool-compatible backend, first of all z3fold.
> > >
> > > The preliminary results for this work have been delivered at Linux Plumbers this year [2]. The talk at LPC, though having attracted limited interest, ended in a consensus to continue the work and pursue the goal of decoupling ZRAM from zsmalloc.
> > >
> > > The current patchset has been stress tested on arm64 and x86_64 devices, including the Dell laptop I'm writing this message on now, not to mention several QEmu confugirations.
> > >
> > > [1] https://lkml.org/lkml/2015/9/14/356
> > > [2] https://linuxplumbersconf.org/event/4/contributions/551/
> >
> > Please describe what's the usecase in real world, what's the benefit zsmalloc
> > cannot fulfill by desgin and how it's significant.
> 
> I'm not entirely sure how to interpret the phrase "the benefit
> zsmalloc cannot fulfill by design" but let me explain.
> First, there are multi multi core systems where z3fold can provide
> better throughput.

Please include numbers in the description along with the workload.

> Then, there are low end systems with hardware
> compression/decompression support which don't need zsmalloc
> sophistication and would rather use zbud with ZRAM because the
> compression ratio is relatively low.

I have trouble imagining how it would be bad with zsmalloc. Could you
be more specific?

> Finally, there are MMU-less systems targeting IOT and still running
> Linux and having a compressed RAM disk is something that would help
> these systems operate in a better way (for the benefit of the overall
> Linux ecosystem, if you care about that, of course; well, some people
> do).

Could you write down what the problem is with using zsmalloc on an
MMU-less system? Maybe that would be a more important point than the
other performance arguments, since the other functions' overheads in
the call path are already rather big.

> 
> > I really don't want to make fragmentaion of allocator so we should really see
> > how zsmalloc cannot achieve things if you are claiming.
> 
> I have to say that this point is completely bogus. We do not create
> fragmentation by using a better defined and standardized API. In fact,
> we aim to increase the number of use cases and test coverage for ZRAM.
> With that said, I have hard time seeing how zsmalloc can operate on a
> MMU-less system.
> 
> > Please tell us how to test it so that we could investigate what's the root
> > cause.
> 
> I gather you haven't read neither the LPC documents nor my
> conversation with Sergey re: these changes, because if you did you
> wouldn't have had the type of questions you're asking. Please also see
> above.

Please include your claims in the description rather than attaching a
file. That's the usual way we work, because it makes things easier to
discuss inline.

> 
> I feel a bit awkward explaining basic things to you but there may not
> be other "root cause" than applicability issue. zsmalloc is a great
> allocator but it's not universal and has its limitations. The
> (potential) scope for ZRAM is wider than zsmalloc can provide. We are
> *helping* _you_ to extend this scope "in real world" (c) and you come
> up with bogus objections. Why?

Please add more detail to convince us; then we can think over why
zsmalloc cannot be improved for this use case.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/3] zpool: extend API to match zsmalloc
  2019-10-10 20:09 ` [PATCH 1/3] zpool: extend API to match zsmalloc Vitaly Wool
@ 2019-10-18 11:23   ` Dan Streetman
  0 siblings, 0 replies; 17+ messages in thread
From: Dan Streetman @ 2019-10-18 11:23 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Linux-MM, Andrew Morton, Minchan Kim, Sergey Senozhatsky, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

On Thu, Oct 10, 2019 at 4:09 PM Vitaly Wool <vitalywool@gmail.com> wrote:
>
> This patch adds the following functions to the zpool API:
> - zpool_compact()
> - zpool_get_num_compacted()
> - zpool_huge_class_size()
>
> The first one triggers compaction for the underlying allocator, the
> second retrieves the number of pages migrated due to compaction for
> the whole time of this pool's existence and the third one returns
> the huge class size.
>
> This API extension is done to align zpool API with zsmalloc API.
>
> Signed-off-by: Vitaly Wool <vitalywool@gmail.com>

Seems reasonable to me.

Reviewed-by: Dan Streetman <ddstreet@ieee.org>

> ---
>  include/linux/zpool.h | 14 +++++++++++++-
>  mm/zpool.c            | 36 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 49 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/zpool.h b/include/linux/zpool.h
> index 51bf43076165..31f0c1360569 100644
> --- a/include/linux/zpool.h
> +++ b/include/linux/zpool.h
> @@ -61,8 +61,13 @@ void *zpool_map_handle(struct zpool *pool, unsigned long handle,
>
>  void zpool_unmap_handle(struct zpool *pool, unsigned long handle);
>
> +unsigned long zpool_compact(struct zpool *pool);
> +
> +unsigned long zpool_get_num_compacted(struct zpool *pool);
> +
>  u64 zpool_get_total_size(struct zpool *pool);
>
> +size_t zpool_huge_class_size(struct zpool *zpool);
>
>  /**
>   * struct zpool_driver - driver implementation for zpool
> @@ -75,7 +80,10 @@ u64 zpool_get_total_size(struct zpool *pool);
>   * @shrink:    shrink the pool.
>   * @map:       map a handle.
>   * @unmap:     unmap a handle.
> - * @total_size:        get total size of a pool.
> + * @compact:   try to run compaction over a pool
> + * @get_num_compacted: get amount of compacted pages for a pool
> + * @total_size:        get total size of a pool
> + * @huge_class_size: huge class threshold for pool pages.
>   *
>   * This is created by a zpool implementation and registered
>   * with zpool.
> @@ -104,7 +112,11 @@ struct zpool_driver {
>                                 enum zpool_mapmode mm);
>         void (*unmap)(void *pool, unsigned long handle);
>
> +       unsigned long (*compact)(void *pool);
> +       unsigned long (*get_num_compacted)(void *pool);
> +
>         u64 (*total_size)(void *pool);
> +       size_t (*huge_class_size)(void *pool);
>  };
>
>  void zpool_register_driver(struct zpool_driver *driver);
> diff --git a/mm/zpool.c b/mm/zpool.c
> index 863669212070..55e69213c2eb 100644
> --- a/mm/zpool.c
> +++ b/mm/zpool.c
> @@ -362,6 +362,30 @@ void zpool_unmap_handle(struct zpool *zpool, unsigned long handle)
>         zpool->driver->unmap(zpool->pool, handle);
>  }
>
> + /**
> + * zpool_compact() - try to run compaction over zpool
> + * @pool       The zpool to compact
> + *
> + * Returns: the number of migrated pages
> + */
> +unsigned long zpool_compact(struct zpool *zpool)
> +{
> +       return zpool->driver->compact ? zpool->driver->compact(zpool->pool) : 0;
> +}
> +
> +
> +/**
> + * zpool_get_num_compacted() - get the number of migrated/compacted pages
> + * @pool       The zpool to get compaction statistic for
> + *
> + * Returns: the total number of migrated pages for the pool
> + */
> +unsigned long zpool_get_num_compacted(struct zpool *zpool)
> +{
> +       return zpool->driver->get_num_compacted ?
> +               zpool->driver->get_num_compacted(zpool->pool) : 0;
> +}
> +
>  /**
>   * zpool_get_total_size() - The total size of the pool
>   * @zpool:     The zpool to check
> @@ -375,6 +399,18 @@ u64 zpool_get_total_size(struct zpool *zpool)
>         return zpool->driver->total_size(zpool->pool);
>  }
>
> +/**
> + * zpool_huge_class_size() - get size for the "huge" class
> + * @pool       The zpool to check
> + *
> + * Returns: size of the huge class
> + */
> +size_t zpool_huge_class_size(struct zpool *zpool)
> +{
> +       return zpool->driver->huge_class_size ?
> +               zpool->driver->huge_class_size(zpool->pool) : 0;
> +}
> +
>  /**
>   * zpool_evictable() - Test if zpool is potentially evictable
>   * @zpool:     The zpool to test
> --
> 2.20.1

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
  2019-10-15 20:00     ` Minchan Kim
@ 2019-10-21 14:21       ` Vitaly Wool
  2019-10-30  0:10         ` Minchan Kim
  0 siblings, 1 reply; 17+ messages in thread
From: Vitaly Wool @ 2019-10-21 14:21 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Sergey Senozhatsky, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

On Tue, Oct 15, 2019 at 10:00 PM Minchan Kim <minchan@kernel.org> wrote:
>
> On Tue, Oct 15, 2019 at 09:39:35AM +0200, Vitaly Wool wrote:
> > Hi Minchan,
> >
> > On Mon, Oct 14, 2019 at 6:41 PM Minchan Kim <minchan@kernel.org> wrote:
> > >
> > > On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly Wool wrote:
> > > > The coming patchset is a new take on the old issue: ZRAM can currently be used only with zsmalloc even though this may not be the optimal combination for some configurations. The previous (unsuccessful) attempt dates back to 2015 [1] and is notable for the heated discussions it has caused.
> > > >
> > > > The patchset in [1] had basically the only goal of enabling ZRAM/zbud combo which had a very narrow use case. Things have changed substantially since then, and now, with z3fold used widely as a zswap backend, I, as the z3fold maintainer, am getting requests to re-interate on making it possible to use ZRAM with any zpool-compatible backend, first of all z3fold.
> > > >
> > > > The preliminary results for this work have been delivered at Linux Plumbers this year [2]. The talk at LPC, though having attracted limited interest, ended in a consensus to continue the work and pursue the goal of decoupling ZRAM from zsmalloc.
> > > >
> > > > The current patchset has been stress tested on arm64 and x86_64 devices, including the Dell laptop I'm writing this message on now, not to mention several QEmu confugirations.
> > > >
> > > > [1] https://lkml.org/lkml/2015/9/14/356
> > > > [2] https://linuxplumbersconf.org/event/4/contributions/551/
> > >
> > > Please describe what's the usecase in real world, what's the benefit zsmalloc
> > > cannot fulfill by desgin and how it's significant.
> >
> > I'm not entirely sure how to interpret the phrase "the benefit
> > zsmalloc cannot fulfill by design" but let me explain.
> > First, there are multi multi core systems where z3fold can provide
> > better throughput.
>
> Please include number in the description with workload.

Sure. So on an HMP 8-core ARM64 system with ZRAM, we run the following command:
fio --bs=4k --randrepeat=1 --randseed=100 --refill_buffers \
    --buffer_compress_percentage=50 --scramble_buffers=1 \
    --direct=1 --loops=15 --numjobs=4 --filename=/dev/block/zram0 \
     --name=seq-write --rw=write --stonewall --name=seq-read \
     --rw=read --stonewall --name=seq-readwrite --rw=rw --stonewall \
     --name=rand-readwrite --rw=randrw --stonewall

The results are the following:

zsmalloc:
Run status group 0 (all jobs):
  WRITE: io=61440MB, aggrb=1680.4MB/s, minb=430167KB/s,
maxb=440590KB/s, mint=35699msec, maxt=36564msec

Run status group 1 (all jobs):
   READ: io=61440MB, aggrb=1620.4MB/s, minb=414817KB/s,
maxb=414850KB/s, mint=37914msec, maxt=37917msec

Run status group 2 (all jobs):
  READ: io=30615MB, aggrb=897979KB/s, minb=224494KB/s,
maxb=228161KB/s, mint=34351msec, maxt=34912msec
  WRITE: io=30825MB, aggrb=904110KB/s, minb=226027KB/s,
maxb=229718KB/s, mint=34351msec, maxt=34912msec

Run status group 3 (all jobs):
   READ: io=30615MB, aggrb=772002KB/s, minb=193000KB/s,
maxb=193010KB/s, mint=40607msec, maxt=40609msec
  WRITE: io=30825MB, aggrb=777273KB/s, minb=194318KB/s,
maxb=194327KB/s, mint=40607msec, maxt=40609msec

z3fold:
Run status group 0 (all jobs):
  WRITE: io=61440MB, aggrb=1224.8MB/s, minb=313525KB/s,
maxb=329941KB/s, mint=47671msec, maxt=50167msec

Run status group 1 (all jobs):
   READ: io=61440MB, aggrb=3119.3MB/s, minb=798529KB/s,
maxb=862883KB/s, mint=18228msec, maxt=19697msec

Run status group 2 (all jobs):
   READ: io=30615MB, aggrb=937283KB/s, minb=234320KB/s,
maxb=234334KB/s, mint=33446msec, maxt=33448msec
  WRITE: io=30825MB, aggrb=943682KB/s, minb=235920KB/s,
maxb=235934KB/s, mint=33446msec, maxt=33448msec

Run status group 3 (all jobs):
   READ: io=30615MB, aggrb=829591KB/s, minb=207397KB/s,
maxb=210285KB/s, mint=37271msec, maxt=37790msec
  WRITE: io=30825MB, aggrb=835255KB/s, minb=208813KB/s,
maxb=211721KB/s, mint=37271msec, maxt=37790msec

So, z3fold is faster everywhere (including being *two* times faster on
read) except for sequential write, which is the least important use
case in the real world.

> > Then, there are low end systems with hardware
> > compression/decompression support which don't need zsmalloc
> > sophistication and would rather use zbud with ZRAM because the
> > compression ratio is relatively low.
>
> I couldn't imagine how it's bad with zsmalloc. Could you be more
> specific?


> > Finally, there are MMU-less systems targeting IOT and still running
> > Linux and having a compressed RAM disk is something that would help
> > these systems operate in a better way (for the benefit of the overall
> > Linux ecosystem, if you care about that, of course; well, some people
> > do).
>
> Could you write down what's the problem to use zsmalloc for MMU-less
> system? Maybe, it would be important point rather other performance
> argument since other functions's overheads in the callpath are already
> rather big.

Well, I assume you had the reasons to make zsmalloc depend on MMU in Kconfig:
...
config ZSMALLOC
    tristate "Memory allocator for compressed pages"
    depends on MMU
    help
...

But even disregarding that, let's compare ZRAM/zbud and ZRAM/zsmalloc
performance and the memory these two consume on a relatively low-end
2-core ARM.
Command:
fio --bs=4k --randrepeat=1 --randseed=100 --refill_buffers
--scramble_buffers=1 \
        --direct=1 --loops=15 --numjobs=2 --filename=/dev/block/zram0 \
        --name=seq-write --rw=write --stonewall --name=seq-read --rw=read \
        --stonewall --name=seq-readwrite --rw=rw --stonewall
--name=rand-readwrite \
        --rw=randrw --stonewall

zsmalloc:
Run status group 0 (all jobs):
  WRITE: io=30720MB, aggrb=374763KB/s, minb=187381KB/s,
maxb=188389KB/s, mint=83490msec, maxt=83939msec

Run status group 1 (all jobs):
   READ: io=30720MB, aggrb=964000KB/s, minb=482000KB/s,
maxb=482015KB/s, mint=32631msec, maxt=32632msec

Run status group 2 (all jobs):
   READ: io=15308MB, aggrb=431263KB/s, minb=215631KB/s,
maxb=215898KB/s, mint=36302msec, maxt=36347msec
  WRITE: io=15412MB, aggrb=434207KB/s, minb=217103KB/s,
maxb=217373KB/s, mint=36302msec, maxt=36347msec

Run status group 3 (all jobs):
   READ: io=15308MB, aggrb=327328KB/s, minb=163664KB/s,
maxb=163667KB/s, mint=47887msec, maxt=47888msec
  WRITE: io=15412MB, aggrb=329563KB/s, minb=164781KB/s,
maxb=164785KB/s, mint=47887msec, maxt=47888msec

zbud:
Run status group 0 (all jobs):
  WRITE: io=30720MB, aggrb=735980KB/s, minb=367990KB/s,
maxb=373079KB/s, mint=42159msec, maxt=42742msec

Run status group 1 (all jobs):
   READ: io=30720MB, aggrb=927915KB/s, minb=463957KB/s,
maxb=463999KB/s, mint=33898msec, maxt=33901msec

Run status group 2 (all jobs):
   READ: io=15308MB, aggrb=403467KB/s, minb=201733KB/s,
maxb=202051KB/s, mint=38790msec, maxt=38851msec
  WRITE: io=15412MB, aggrb=406222KB/s, minb=203111KB/s,
maxb=203430KB/s, mint=38790msec, maxt=38851msec

Run status group 3 (all jobs):
   READ: io=15308MB, aggrb=334967KB/s, minb=167483KB/s,
maxb=167487KB/s, mint=46795msec, maxt=46796msec
  WRITE: io=15412MB, aggrb=337254KB/s, minb=168627KB/s,
maxb=168630KB/s, mint=46795msec, maxt=46796msec

Pretty equal, except for sequential write, which is twice as good with zbud.

Now to the fun part.
zsmalloc:
  0 .text         00002908  0000000000000000  0000000000000000  00000040  2**2
                  CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
zbud:
  0 .text         0000072c  0000000000000000  0000000000000000  00000040  2**2
                  CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE

And this does not cover the dynamic memory allocation overhead, which is
higher for zsmalloc. So once again, given that the compression ratio
is low (e.g. a simple HW accelerator is used), what would most
unbiased people prefer to use in this case?

> > > I really don't want to make fragmentaion of allocator so we should really see
> > > how zsmalloc cannot achieve things if you are claiming.
> >
> > I have to say that this point is completely bogus. We do not create
> > fragmentation by using a better defined and standardized API. In fact,
> > we aim to increase the number of use cases and test coverage for ZRAM.
> > With that said, I have hard time seeing how zsmalloc can operate on a
> > MMU-less system.
> >
> > > Please tell us how to test it so that we could investigate what's the root
> > > cause.
> >
> > I gather you haven't read neither the LPC documents nor my
> > conversation with Sergey re: these changes, because if you did you
> > wouldn't have had the type of questions you're asking. Please also see
> > above.
>
> Please include your claims in the description rather than attaching
> file. That's the usualy way how we work because it could make easier to
> discuss by inline.

Did I attach something? I don't quite recall that. I posted links to
previous discussions and conference materials, each for a reason.

> >
> > I feel a bit awkward explaining basic things to you but there may not
> > be other "root cause" than applicability issue. zsmalloc is a great
> > allocator but it's not universal and has its limitations. The
> > (potential) scope for ZRAM is wider than zsmalloc can provide. We are
> > *helping* _you_ to extend this scope "in real world" (c) and you come
> > up with bogus objections. Why?
>
> Please add more detail to convince so we need to think over why zsmalloc
> cannot be improved for the usecase.

This approach is wrong. zsmalloc is good enough and covers a lot of
use cases but there are still some where it doesn't work that well by
design. E.g. on an XIP system we do care about code size, since the
code is stored uncompressed, but we still want to use ZRAM. Why would
we want to waste almost 10K just on zsmalloc code if the counterpart
(zbud in that case) works better?

Best regards,
   Vitaly

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
  2019-10-21 14:21       ` Vitaly Wool
@ 2019-10-30  0:10         ` Minchan Kim
  2019-11-13 15:54           ` Vitaly Wool
  0 siblings, 1 reply; 17+ messages in thread
From: Minchan Kim @ 2019-10-30  0:10 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Sergey Senozhatsky, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

On Mon, Oct 21, 2019 at 04:21:21PM +0200, Vitaly Wool wrote:
> On Tue, Oct 15, 2019 at 10:00 PM Minchan Kim <minchan@kernel.org> wrote:
> >
> > On Tue, Oct 15, 2019 at 09:39:35AM +0200, Vitaly Wool wrote:
> > > Hi Minchan,
> > >
> > > On Mon, Oct 14, 2019 at 6:41 PM Minchan Kim <minchan@kernel.org> wrote:
> > > >
> > > > On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly Wool wrote:
> > > > > The coming patchset is a new take on the old issue: ZRAM can currently be used only with zsmalloc even though this may not be the optimal combination for some configurations. The previous (unsuccessful) attempt dates back to 2015 [1] and is notable for the heated discussions it has caused.
> > > > >
> > > > > The patchset in [1] had basically the only goal of enabling ZRAM/zbud combo which had a very narrow use case. Things have changed substantially since then, and now, with z3fold used widely as a zswap backend, I, as the z3fold maintainer, am getting requests to re-interate on making it possible to use ZRAM with any zpool-compatible backend, first of all z3fold.
> > > > >
> > > > > The preliminary results for this work have been delivered at Linux Plumbers this year [2]. The talk at LPC, though having attracted limited interest, ended in a consensus to continue the work and pursue the goal of decoupling ZRAM from zsmalloc.
> > > > >
> > > > > The current patchset has been stress tested on arm64 and x86_64 devices, including the Dell laptop I'm writing this message on now, not to mention several QEmu confugirations.
> > > > >
> > > > > [1] https://lkml.org/lkml/2015/9/14/356
> > > > > [2] https://linuxplumbersconf.org/event/4/contributions/551/
> > > >
> > > > Please describe what's the usecase in real world, what's the benefit zsmalloc
> > > > cannot fulfill by desgin and how it's significant.
> > >
> > > I'm not entirely sure how to interpret the phrase "the benefit
> > > zsmalloc cannot fulfill by design" but let me explain.
> > > First, there are multi multi core systems where z3fold can provide
> > > better throughput.
> >
> > Please include number in the description with workload.
> 
> Sure. So on an HMP 8-core ARM64 system with ZRAM, we run the following command:
> fio --bs=4k --randrepeat=1 --randseed=100 --refill_buffers \
>     --buffer_compress_percentage=50 --scramble_buffers=1 \
>     --direct=1 --loops=15 --numjobs=4 --filename=/dev/block/zram0 \
>      --name=seq-write --rw=write --stonewall --name=seq-read \
>      --rw=read --stonewall --name=seq-readwrite --rw=rw --stonewall \
>      --name=rand-readwrite --rw=randrw --stonewall
> 
> The results are the following:
> 
> zsmalloc:
> Run status group 0 (all jobs):
>   WRITE: io=61440MB, aggrb=1680.4MB/s, minb=430167KB/s,
> maxb=440590KB/s, mint=35699msec, maxt=36564msec
> 
> Run status group 1 (all jobs):
>    READ: io=61440MB, aggrb=1620.4MB/s, minb=414817KB/s,
> maxb=414850KB/s, mint=37914msec, maxt=37917msec
> 
> Run status group 2 (all jobs):
>   READ: io=30615MB, aggrb=897979KB/s, minb=224494KB/s,
> maxb=228161KB/s, mint=34351msec, maxt=34912msec
>   WRITE: io=30825MB, aggrb=904110KB/s, minb=226027KB/s,
> maxb=229718KB/s, mint=34351msec, maxt=34912msec
> 
> Run status group 3 (all jobs):
>    READ: io=30615MB, aggrb=772002KB/s, minb=193000KB/s,
> maxb=193010KB/s, mint=40607msec, maxt=40609msec
>   WRITE: io=30825MB, aggrb=777273KB/s, minb=194318KB/s,
> maxb=194327KB/s, mint=40607msec, maxt=40609msec
> 
> z3fold:
> Run status group 0 (all jobs):
>   WRITE: io=61440MB, aggrb=1224.8MB/s, minb=313525KB/s,
> maxb=329941KB/s, mint=47671msec, maxt=50167msec
> 
> Run status group 1 (all jobs):
>    READ: io=61440MB, aggrb=3119.3MB/s, minb=798529KB/s,
> maxb=862883KB/s, mint=18228msec, maxt=19697msec
> 
> Run status group 2 (all jobs):
>    READ: io=30615MB, aggrb=937283KB/s, minb=234320KB/s,
> maxb=234334KB/s, mint=33446msec, maxt=33448msec
>   WRITE: io=30825MB, aggrb=943682KB/s, minb=235920KB/s,
> maxb=235934KB/s, mint=33446msec, maxt=33448msec
> 
> Run status group 3 (all jobs):
>    READ: io=30615MB, aggrb=829591KB/s, minb=207397KB/s,
> maxb=210285KB/s, mint=37271msec, maxt=37790msec
>   WRITE: io=30825MB, aggrb=835255KB/s, minb=208813KB/s,
> maxb=211721KB/s, mint=37271msec, maxt=37790msec
> 
> So, z3fold is faster everywhere (including being *two* times faster on
> read) except for sequential write, which is the least important use
> case in the real world.

No. Write is also important because it affects reclaim speed.

I ran fio on x86 with various compression ratios.

The left column is zsmalloc, the right column is z3fold.

The operation order is
	seq-write
	rand-write
	seq-read
	rand-read
	mixed-seq
	mixed-rand
	trim
	mem_used - byte unit

The last line, mem_used, indicates how much memory each allocator used
to store the compressed pages.

1) compression ratio 75

     WRITE     2535          WRITE     1928
     WRITE     2425          WRITE     1886
      READ     6211           READ     5731
      READ     6339           READ     6182
      READ     1791           READ     1592
     WRITE     1790          WRITE     1591
      READ     1704           READ     1493
     WRITE     1699          WRITE     1489
     WRITE      984          WRITE      974
      TRIM      984           TRIM      974
  mem_used 29986816       mem_used 61239296

For every operation, zsmalloc is faster than z3fold.
It also used half the memory compared to z3fold.

2) compression ratio 66

     WRITE     2125          WRITE     1258
     WRITE     2107          WRITE     1233
      READ     5714           READ     5793
      READ     5948           READ     6065
      READ     1667           READ     1248
     WRITE     1666          WRITE     1247
      READ     1521           READ     1218
     WRITE     1517          WRITE     1215
     WRITE      943          WRITE      870
      TRIM      943           TRIM      870
  mem_used 38158336       mem_used 76779520

For the read operations only, z3fold is a bit faster than zsmalloc, by about 2%.
However, look at the other operations, where zsmalloc is much faster,
and look at the memory used.

3) compression ratio 50

     WRITE     2051          WRITE     1109
     WRITE     2029          WRITE     1087
      READ     5366           READ     6364
      READ     5575           READ     5785
      READ     1497           READ     1121
     WRITE     1496          WRITE     1121
      READ     1432           READ     1065
     WRITE     1428          WRITE     1062
     WRITE      930          WRITE      838
      TRIM      930           TRIM      838
  mem_used 59932672       mem_used 104873984

Sequential read on z3fold is faster, by about 15%. However, look at the other
operations and the memory used: zsmalloc is better.

The reason zsmalloc is slow at a 50% compression ratio is that it needs a page
copy for every read operation, since the compressed objects cross page boundaries.
However, I don't think this is a real workload, because the compression ratios will
spread out over various sizes. Having said that, I could enhance zsmalloc
to avoid the copy operation. I will work on it.
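
To make the copy I am describing concrete, here is a minimal sketch (the names
are hypothetical, this is not the actual zsmalloc read path) of why an object
that crosses a page boundary forces a memcpy into a bounce buffer, while an
object that fits in one page could be handed out without any copy:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Sketch only: reading a compressed object that may span two
 * physically discontiguous pages (p1, p2). When the object fits in
 * one page the allocator can return the mapping directly and skip
 * the copy; when it crosses the boundary a copy into a contiguous
 * bounce buffer is needed before decompression.
 */
static void obj_read(struct page *p1, struct page *p2,
		     unsigned int off, size_t len, void *buf)
{
	void *src = kmap_atomic(p1);

	if (off + len <= PAGE_SIZE) {
		/* Fits in one page: zsmalloc could hand out 'src + off'
		 * here instead of copying at all. */
		memcpy(buf, src + off, len);
		kunmap_atomic(src);
	} else {
		/* Crosses the page boundary: the copy is unavoidable. */
		size_t first = PAGE_SIZE - off;

		memcpy(buf, src + off, first);
		kunmap_atomic(src);

		src = kmap_atomic(p2);
		memcpy(buf + first, src, len - first);
		kunmap_atomic(src);
	}
}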

4) compression ratio 33

     WRITE     1945          WRITE     1239
     WRITE     1869          WRITE     1222
      READ     5319           READ     6206
      READ     5416           READ     6645
      READ     1480           READ     1188
     WRITE     1479          WRITE     1188
      READ     1403           READ     1114
     WRITE     1399          WRITE     1110
     WRITE      930          WRITE      793
      TRIM      930           TRIM      793
  mem_used 78667776       mem_used 104873984

5) compression ratio 25

     WRITE     1862          WRITE     1080
     WRITE     1840          WRITE     1052
      READ     5260           READ     6240
      READ     5540           READ     6359
      READ     1445           READ     1040
     WRITE     1444          WRITE     1039
      READ     1354           READ     1006
     WRITE     1350          WRITE     1003
     WRITE      909          WRITE      775
      TRIM      909           TRIM      775
  mem_used 83902464       mem_used 104873984

If the compression ratio is bad, a zram read with zsmalloc could
be about 15% slower than with z3fold because it needs the additional
memory copy I mentioned. However, zsmalloc is still faster if the
compression ratio is greater than 50%, which is the usual case (I
believe that's why you made z3fold).

> 
> > > Then, there are low-end systems with hardware
> > > compression/decompression support which don't need zsmalloc's
> > > sophistication and would rather use zbud with ZRAM, because the
> > > compression ratio is relatively low.
> >
> > I couldn't imagine how zsmalloc would be bad for that. Could you be more
> > specific?
> 
> 
> > > Finally, there are MMU-less systems targeting IoT that still run
> > > Linux, and having a compressed RAM disk is something that would help
> > > these systems operate in a better way (for the benefit of the overall
> > > Linux ecosystem, if you care about that, of course; well, some people
> > > do).
> >
> > Could you write down what the problem is with using zsmalloc on an MMU-less
> > system? That may be a more important point than the other performance
> > arguments, since the overheads of the other functions in the call path are
> > already rather big.
> 
> Well, I assume you had your reasons to make zsmalloc depend on MMU in Kconfig:
> ...
> config ZSMALLOC
>     tristate "Memory allocator for compressed pages"
>     depends on MMU
>     help
> ...

That's an old leftover from when zsmalloc used the mapping API, so I think we
could remove the dependency now. However, I want to know whether that is the
only problem with using zram on an MMU-less system. IOW, if we removed the
zsmalloc MMU dependency, would zram be ready to use on an MMU-less system?
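
For reference, the mapping trick that historically wanted the MMU is stitching
the two pages backing one object into a virtually contiguous range. A minimal
sketch, assuming the vm_map_ram() signature of the kernels in this thread
(which still takes a pgprot_t); the wrapper name here is hypothetical:

#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Sketch only: with an MMU the two pages backing one object can be
 * made virtually contiguous, so no data copy is needed; without an
 * MMU there is no such remapping and a bounce-buffer copy (see the
 * read sketch above) is the only option.
 */
static void *map_two_pages(struct page *pages[2])
{
#ifdef CONFIG_MMU
	return vm_map_ram(pages, 2, -1, PAGE_KERNEL);
#else
	return NULL;	/* caller falls back to the memcpy path */
#endif
}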

> 
> But even disregarding that, let's compare ZRAM/zbud and ZRAM/zsmalloc
> performance and memory these two consume on a relatively low end
> 2-core ARM.
> Command:
> fio --bs=4k --randrepeat=1 --randseed=100 --refill_buffers
> --scramble_buffers=1 \
>         --direct=1 --loops=15 --numjobs=2 --filename=/dev/block/zram0 \
>         --name=seq-write --rw=write --stonewall --name=seq-read --rw=read \
>         --stonewall --name=seq-readwrite --rw=rw --stonewall
> --name=rand-readwrite \
>         --rw=randrw --stonewall
> 
> zsmalloc:
> Run status group 0 (all jobs):
>   WRITE: io=30720MB, aggrb=374763KB/s, minb=187381KB/s,
> maxb=188389KB/s, mint=83490msec, maxt=83939msec
> 
> Run status group 1 (all jobs):
>    READ: io=30720MB, aggrb=964000KB/s, minb=482000KB/s,
> maxb=482015KB/s, mint=32631msec, maxt=32632msec
> 
> Run status group 2 (all jobs):
>    READ: io=15308MB, aggrb=431263KB/s, minb=215631KB/s,
> maxb=215898KB/s, mint=36302msec, maxt=36347msec
>   WRITE: io=15412MB, aggrb=434207KB/s, minb=217103KB/s,
> maxb=217373KB/s, mint=36302msec, maxt=36347msec
> 
> Run status group 3 (all jobs):
>    READ: io=15308MB, aggrb=327328KB/s, minb=163664KB/s,
> maxb=163667KB/s, mint=47887msec, maxt=47888msec
>   WRITE: io=15412MB, aggrb=329563KB/s, minb=164781KB/s,
> maxb=164785KB/s, mint=47887msec, maxt=47888msec
> 
> zbud:
> Run status group 0 (all jobs):
>   WRITE: io=30720MB, aggrb=735980KB/s, minb=367990KB/s,
> maxb=373079KB/s, mint=42159msec, maxt=42742msec
> 
> Run status group 1 (all jobs):
>    READ: io=30720MB, aggrb=927915KB/s, minb=463957KB/s,
> maxb=463999KB/s, mint=33898msec, maxt=33901msec
> 
> Run status group 2 (all jobs):
>    READ: io=15308MB, aggrb=403467KB/s, minb=201733KB/s,
> maxb=202051KB/s, mint=38790msec, maxt=38851msec
>   WRITE: io=15412MB, aggrb=406222KB/s, minb=203111KB/s,
> maxb=203430KB/s, mint=38790msec, maxt=38851msec
> 
> Run status group 3 (all jobs):
>    READ: io=15308MB, aggrb=334967KB/s, minb=167483KB/s,
> maxb=167487KB/s, mint=46795msec, maxt=46796msec
>   WRITE: io=15412MB, aggrb=337254KB/s, minb=168627KB/s,
> maxb=168630KB/s, mint=46795msec, maxt=46796msec
> 
> Pretty equal except for sequential write which is twice as good with zbud.

Thanks for the testing. I also tried to test zbud with zram but failed, because fio
submits incompressible pages to zram even though I specify a 100% compression ratio,
and zbud doesn't support 4K page allocation, so zram couldn't work with it
at the moment. I tried various fio versions, including old ones, but everything failed.

How did you test it successfully? Let me know your fio version.
I want to investigate what the performance bottleneck is besides the page copy
so that I can optimize it.

> 
> Now to the fun part.
> zsmalloc:
>   0 .text         00002908  0000000000000000  0000000000000000  00000040  2**2
>                   CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
> zbud:
>   0 .text         0000072c  0000000000000000  0000000000000000  00000040  2**2
>                   CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
> 
> And this does not cover dynamic memory allocation overhead which is
> higher for zsmalloc. So once again, given that the compression ratio
> is low (e. g. a simple HW accelerator is used), what would most
> unbiased people prefer to use in this case?

Zsmalloc has more features than zbud. That's why you see the code size
difference. It was intentional, because at that time most of the users were
mobile phones, TVs and other smart devices. They needed those features.

We could make those features possible to turn off at build time, which would improve
performance and reduce the code size a lot. That would be no problem for a
user who wanted zbud, which already lacks those features.
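
As an illustration of what such build-time gating could look like: the config
symbol below is hypothetical, while zs_compact() is the existing zsmalloc entry
point. This is a sketch, not a concrete proposal:

struct zs_pool;

/*
 * Sketch only: CONFIG_ZSMALLOC_COMPACTION is a hypothetical symbol,
 * shown to illustrate compiling an optional feature out for small or
 * MMU-less systems while keeping the same call sites.
 */
#ifdef CONFIG_ZSMALLOC_COMPACTION
unsigned long zs_compact(struct zs_pool *pool);
#else
static inline unsigned long zs_compact(struct zs_pool *pool)
{
	return 0;	/* compaction compiled out: nothing migrated */
}
#endif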

> 
> > > > I really don't want to create allocator fragmentation, so we should really
> > > > see what zsmalloc cannot achieve, if that is what you are claiming.
> > >
> > > I have to say that this point is completely bogus. We do not create
> > > fragmentation by using a better defined and standardized API. In fact,
> > > we aim to increase the number of use cases and test coverage for ZRAM.
> > > With that said, I have a hard time seeing how zsmalloc can operate on
> > > an MMU-less system.
> > >
> > > > Please tell us how to test it so that we could investigate what's the root
> > > > cause.
> > >
> > > I gather you have read neither the LPC documents nor my
> > > conversation with Sergey re: these changes, because if you had you
> > > wouldn't be asking the kind of questions you're asking. Please also see
> > > above.
> >
> > Please include your claims in the description rather than attaching
> > a file. That's the usual way we work, because it makes it easier to
> > discuss inline.
> 
> Did I attach something? I don't quite recall that. I posted links to
> previous discussions and conference materials, each for a reason.
> 
> > >
> > > I feel a bit awkward explaining basic things to you, but there may not
> > > be any other "root cause" than the applicability issue. zsmalloc is a great
> > > allocator but it's not universal and has its limitations. The
> > > (potential) scope for ZRAM is wider than zsmalloc can provide. We are
> > > *helping* _you_ to extend this scope "in real world" (c) and you come
> > > up with bogus objections. Why?
> >
> > Please add more detail to convince us, so that we can think over why zsmalloc
> > cannot be improved for this use case.
> 
> This approach is wrong. zsmalloc is good enough and covers a lot of
> use cases but there are still some where it doesn't work that well by
> design. E.g. on an XIP system we do care about the code size, since
> it's stored uncompressed, but we still want to use ZRAM. Why would we want
> to waste almost 10K just on zsmalloc code if the counterpart (zbud in
> that case) works better?

As I mentioned, we could improve zsmalloc to reduce the code size as well as
improve performance. I will work on it.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend
  2019-10-30  0:10         ` Minchan Kim
@ 2019-11-13 15:54           ` Vitaly Wool
  0 siblings, 0 replies; 17+ messages in thread
From: Vitaly Wool @ 2019-11-13 15:54 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Linux-MM, Andrew Morton, Dan Streetman, Sergey Senozhatsky, LKML,
	Vlastimil Babka, Shakeel Butt, Henry Burns, Theodore Ts'o

Hi Minchan,

On Wed, Oct 30, 2019 at 1:10 AM Minchan Kim <minchan@kernel.org> wrote:

<snip>
>
>
> I ran fio on x86 with various compression ratios.
>
> The left column is zsmalloc, the right column is z3fold.
>
> The operation order is
>         seq-write
>         rand-write
>         seq-read
>         rand-read
>         mixed-seq
>         mixed-rand
>         trim
>         mem_used - byte unit
>
> The last line, mem_used, indicates how much memory each allocator used
> to store the compressed pages.
>
> 1) compression ratio 75
>
>      WRITE     2535          WRITE     1928
>      WRITE     2425          WRITE     1886
>       READ     6211           READ     5731
>       READ     6339           READ     6182
>       READ     1791           READ     1592
>      WRITE     1790          WRITE     1591
>       READ     1704           READ     1493
>      WRITE     1699          WRITE     1489
>      WRITE      984          WRITE      974
>       TRIM      984           TRIM      974
>   mem_used 29986816       mem_used 61239296
>
> For every operation, zsmalloc is faster than z3fold.
> It also used half the memory compared to z3fold.
>
> 2) compression ratio 66
>
>      WRITE     2125          WRITE     1258
>      WRITE     2107          WRITE     1233
>       READ     5714           READ     5793
>       READ     5948           READ     6065
>       READ     1667           READ     1248
>      WRITE     1666          WRITE     1247
>       READ     1521           READ     1218
>      WRITE     1517          WRITE     1215
>      WRITE      943          WRITE      870
>       TRIM      943           TRIM      870
>   mem_used 38158336       mem_used 76779520
>
> For the read operations only, z3fold is a bit faster than zsmalloc, by about 2%.
> However, look at the other operations, where zsmalloc is much faster,
> and look at the memory used.
>
> 3) compression ratio 50
>
>      WRITE     2051          WRITE     1109
>      WRITE     2029          WRITE     1087
>       READ     5366           READ     6364
>       READ     5575           READ     5785
>       READ     1497           READ     1121
>      WRITE     1496          WRITE     1121
>       READ     1432           READ     1065
>      WRITE     1428          WRITE     1062
>      WRITE      930          WRITE      838
>       TRIM      930           TRIM      838
>   mem_used 59932672       mem_used 104873984
>
> Sequential read on z3fold is faster, by about 15%. However, look at the other
> operations and the memory used: zsmalloc is better.

There are two aspects to this: the measurements you've taken as such,
and how relevant they are to this discussion.
I'd be happy to discuss these measurements in a separate thread if you
specified more precisely what kind of x86 machine the measurements were
taken on.

However, my point was that there are rather common cases where people
want to use z3fold as the ZRAM memory allocation backend. The fact that
there are other cases where people wouldn't want that is perfectly natural
and doesn't need proof.
That's why I propose running ZRAM over the zpool API, for the sake of
flexibility. That would benefit various users of ZRAM and, at the end
of the day, the Linux kernel ecosystem.
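
To make the proposal concrete, here is a minimal sketch of what "ZRAM over the
zpool API" would mean for the store path, using only the existing zpool calls;
the helper names and the way the backend string is chosen are assumptions on my
side, not the actual patchset code:

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/string.h>
#include <linux/zpool.h>

static struct zpool *zram_pool;

/* backend is "zsmalloc", "z3fold" or "zbud", e.g. taken from a sysfs attribute */
static int zram_create_pool(const char *backend)
{
	/* NULL: no eviction ops needed for a block device backend */
	zram_pool = zpool_create_pool(backend, "zram0", GFP_NOIO, NULL);
	return zram_pool ? 0 : -ENOMEM;
}

/* store one compressed page through the backend-agnostic zpool API */
static int zram_store(const void *src, size_t clen, unsigned long *handle)
{
	void *dst;
	int ret;

	ret = zpool_malloc(zram_pool, clen, GFP_NOIO, handle);
	if (ret)
		return ret;

	dst = zpool_map_handle(zram_pool, *handle, ZPOOL_MM_WO);
	memcpy(dst, src, clen);
	zpool_unmap_handle(zram_pool, *handle);
	return 0;
}

Switching between zsmalloc, z3fold and zbud then becomes a matter of passing a
different backend string, which is exactly the flexibility I am after.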

<snip>
> Thanks for the testing. I also tried to test zbud with zram but failed, because fio
> submits incompressible pages to zram even though I specify a 100% compression ratio,
> and zbud doesn't support 4K page allocation, so zram couldn't work with it
> at the moment. I tried various fio versions, including old ones, but everything failed.
>
> How did you test it successfully? Let me know your fio version.
> I want to investigate what the performance bottleneck is besides the page copy
> so that I can optimize it.

You're very welcome. :) The patch to make zbud accept PAGE_SIZE pages
was posted a while ago [1]; it was part of our previous
(pre-z3fold) discussion on the same subject, but you probably didn't
read it at the time.
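
For context, the gist of that change is small. A hypothetical sketch (not the
code from [1]) of a zbud-like allocation path where a PAGE_SIZE object simply
gets a page of its own instead of being rejected:

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical sketch, not the patch from [1]: a zbud-like allocator
 * that accepts size == PAGE_SIZE by dedicating a whole page to the
 * object (bit 0 of the handle marks "no second buddy to pair").
 */
static int buddy_alloc(size_t size, gfp_t gfp, unsigned long *handle)
{
	struct page *page;

	if (size > PAGE_SIZE)
		return -ENOSPC;

	page = alloc_page(gfp);
	if (!page)
		return -ENOMEM;

	*handle = (unsigned long)page_address(page);
	if (size == PAGE_SIZE)
		*handle |= 1;	/* whole page used, skip buddy pairing */

	return 0;
}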

> >
> > Now to the fun part.
> > zsmalloc:
> >   0 .text         00002908  0000000000000000  0000000000000000  00000040  2**2
> >                   CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
> > zbud:
> >   0 .text         0000072c  0000000000000000  0000000000000000  00000040  2**2
> >                   CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
> >
> > And this does not cover dynamic memory allocation overhead which is
> > higher for zsmalloc. So once again, given that the compression ratio
> > is low (e. g. a simple HW accelerator is used), what would most
> > unbiased people prefer to use in this case?
>
> Zsmalloc has more features than zbud. That's why you see the code size
> difference. It was intentional, because at that time most of the users were
> mobile phones, TVs and other smart devices. They needed those features.
>
> We could make those features possible to turn off at build time, which would improve
> performance and reduce the code size a lot. That would be no problem for a
> user who wanted zbud, which already lacks those features.

I do support this idea and would like to help as much as I can, but
why should the people who want to use the ZRAM/zbud combo be left stranded
while we're working on reducing the zsmalloc code size by 4x?

With that said, let me also re-iterate that there may be more
allocators coming, and in some cases zsmalloc won't be a good
fit/alternative, while there will still be a need for a compressed RAM
device. I hope you understand.

Best regards,
   Vitaly

[1] https://lore.kernel.org/patchwork/patch/598210/

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2019-11-13 16:31 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-10 20:04 [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Vitaly Wool
2019-10-10 20:09 ` [PATCH 1/3] zpool: extend API to match zsmalloc Vitaly Wool
2019-10-18 11:23   ` Dan Streetman
2019-10-10 20:11 ` [PATCH 2/3] zsmalloc: add compaction and huge class callbacks Vitaly Wool
2019-10-14 10:38   ` Sergey Senozhatsky
2019-10-10 20:20 ` [PATCH 3/3] zram: use common zpool interface Vitaly Wool
2019-10-14 10:47   ` Sergey Senozhatsky
2019-10-14 11:52     ` Vitaly Wool
2019-10-15  2:04       ` Sergey Senozhatsky
2019-10-14 10:33 ` [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend Sergey Senozhatsky
2019-10-14 11:49   ` Vitaly Wool
2019-10-14 16:41 ` Minchan Kim
2019-10-15  7:39   ` Vitaly Wool
2019-10-15 20:00     ` Minchan Kim
2019-10-21 14:21       ` Vitaly Wool
2019-10-30  0:10         ` Minchan Kim
2019-11-13 15:54           ` Vitaly Wool
