linux-mm.kvack.org archive mirror
* [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
@ 2022-11-21 19:00 Alexey Romanov
  2022-11-21 19:00 ` [RFC PATCH v1 1/4] zram: introduce " Alexey Romanov
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Alexey Romanov @ 2022-11-21 19:00 UTC (permalink / raw)
  To: minchan, senozhatsky, ngupta, akpm
  Cc: linux-mm, linux-kernel, kernel, ddrokosov, Alexey Romanov

Hello!

This RFC series adds a feature that allows merging identical
compressed pages into a single one. The main idea is that
zram only stores references to zsmalloc objects, which hold
the compressed content of the pages. The contents of the
zsmalloc objects themselves don't change in any way.

For simplicity, let's imagine that 3 pages with the same content
got into zram:

+----------------+   +----------------+   +----------------+
|zram_table_entry|   |zram_table_entry|   |zram_table_entry|
+-------+--------+   +-------+--------+   +--------+-------+
        |                    |                     |
        | handle (1)         | handle (2)          | handle (3)
+-------v--------+   +-------v---------+  +--------v-------+
|zsmalloc  object|   |zsmalloc  object |  |zsmalloc  object|
++--------------++   +-+-------------+-+  ++--------------++
 +--------------+      +-------------+     +--------------+
 | buffer: "abc"|      |buffer: "abc"|     | buffer: "abc"|
 +--------------+      +-------------+     +--------------+

As you can see, the data is duplicated. The merge mechanism
keeps (after scanning the objects) only one zsmalloc object.
Here's what happens after the scan and merge:

+----------------+   +----------------+   +----------------+
|zram_table_entry|   |zram_table_entry|   |zram_table_entry|
+-------+--------+   +-------+--------+   +--------+-------+
        |                    |                     |
        | handle (1)         | handle (1)          | handle (1)
        |           +--------v---------+           |
        +-----------> zsmalloc  object <-----------+
                    +--+-------------+-+
                       +-------------+
                       |buffer: "abc"|
                       +-------------+

Thus, we reduce the amount of memory occupied by a factor of 3.

This mechanism doesn't affect the performance of zram itself
in any way (except, perhaps, slightly in the zram_free_page()
function). To describe each such identical object, we permanently
need sizeof(struct zram_rbtree_node) bytes. So, for example, if the
system has 20 identical buffers of size 1024, the memory gain is
(20 * 1024) - (1 * 1024 + sizeof(struct zram_rbtree_node)) = 19456 -
sizeof(struct zram_rbtree_node) bytes. Note, however, that these
counts exclude the overhead of zsmalloc's own data structures.

Testing on my system (8 GB RAM + 1 GB zram swap) showed that under
high load, on average, invoking the merge mechanism saves up to
15-20% of memory usage.

This patch series adds a new sysfs node (to trigger merging) and a new
field in mm_stat (how many pages are currently merged in zram):

  $ cat /sys/block/zram/mm_stat
    431452160 332984392 339894272 0 339894272 282 0 51374 51374 0

  $ echo 1 > /sys/block/zram/merge

  $ cat /sys/block/zram/mm_stat
    431452160 270376848 287301504 0 339894272 282 0 51374 51374 6593

Alexey Romanov (4):
  zram: introduce merge identical pages mechanism
  zram: add merge sysfs knob
  zram: add pages_merged counter to mm_stat
  zram: recompression: add ZRAM_MERGED check

 Documentation/admin-guide/blockdev/zram.rst |   2 +
 drivers/block/zram/zram_drv.c               | 315 +++++++++++++++++++-
 drivers/block/zram/zram_drv.h               |   7 +
 3 files changed, 320 insertions(+), 4 deletions(-)

-- 
2.25.1



^ permalink raw reply	[flat|nested] 18+ messages in thread

* [RFC PATCH v1 1/4] zram: introduce merge identical pages mechanism
  2022-11-21 19:00 [RFC PATCH v1 0/4] Introduce merge identical pages mechanism Alexey Romanov
@ 2022-11-21 19:00 ` Alexey Romanov
  2022-11-23  8:25   ` Chen Wandun
  2022-11-21 19:00 ` [RFC PATCH v1 2/4] zram: add merge sysfs knob Alexey Romanov
  2022-11-21 20:44 ` [RFC PATCH v1 0/4] Introduce merge identical pages mechanism Johannes Weiner
  2 siblings, 1 reply; 18+ messages in thread
From: Alexey Romanov @ 2022-11-21 19:00 UTC (permalink / raw)
  To: minchan, senozhatsky, ngupta, akpm
  Cc: linux-mm, linux-kernel, kernel, ddrokosov, Alexey Romanov

zram maps each page (struct page) to a zsmalloc object that
contains a compressed buffer of that page's data. This
creates data redundancy: for example, two identical pages
will be stored in compressed form in the zsmalloc allocator twice.

This patch adds a mechanism that scans the zram_table_entry array
and frees all identical objects in the zsmalloc allocator, leaving
only one. All zram_table_entry elements that referenced the
freed objects now refer to the same surviving object.

To implement this mechanism, we sequentially scan the
zram_table_entry array, computing a hash of the contents of
each compressed page (zsmalloc object) and entering the index of
the object into a hash table (hlist_head). If the hashes match,
we free the identical zsmalloc objects (zs_free) and update
the link rbtree.

Also, we can't just call zs_free() function during zram_free_page().
Before calling this function we have to make sure that no one else
refers to this zsmalloc object.

To implement the mechanism for merging identical compressed
pages (zsmalloc objects), an rbtree is needed.

The tree node key is a reference to the zsmalloc object
(handle), and the value is the number of references to
this object (atomic counter). This is necessary for data
consistency so that we do not zs_free the object referenced
by any element of the zram_table_entry array.

Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
---
 drivers/block/zram/zram_drv.c | 278 +++++++++++++++++++++++++++++++++-
 drivers/block/zram/zram_drv.h |   6 +
 2 files changed, 283 insertions(+), 1 deletion(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index e290d6d97047..716c2f72805e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -33,12 +33,15 @@
 #include <linux/debugfs.h>
 #include <linux/cpuhotplug.h>
 #include <linux/part_stat.h>
+#include <linux/hashtable.h>
+#include <linux/xxhash.h>
 
 #include "zram_drv.h"
 
 static DEFINE_IDR(zram_index_idr);
 /* idr index must be protected */
 static DEFINE_MUTEX(zram_index_mutex);
+static DEFINE_MUTEX(zram_rbtree_mutex);
 
 static int zram_major;
 static const char *default_compressor = CONFIG_ZRAM_DEF_COMP;
@@ -57,6 +60,16 @@ static void zram_free_page(struct zram *zram, size_t index);
 static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
 				u32 index, int offset, struct bio *bio);
 
+struct zram_rbtree_node {
+	struct rb_node node;
+	unsigned long key;
+	unsigned long cnt;
+};
+
+struct zram_hash_node {
+	unsigned long index;
+	struct hlist_node next;
+};
 
 static int zram_slot_trylock(struct zram *zram, u32 index)
 {
@@ -1283,6 +1296,247 @@ static DEVICE_ATTR_RO(bd_stat);
 #endif
 static DEVICE_ATTR_RO(debug_stat);
 
+static bool zram_rbtree_insert(struct rb_root *root, struct zram_rbtree_node *data)
+{
+	struct rb_node **new = &(root->rb_node), *parent = NULL;
+	struct zram_rbtree_node *this;
+
+	while (*new) {
+		this = rb_entry(*new, struct zram_rbtree_node, node);
+		parent = *new;
+		if (data->key < this->key)
+			new = &((*new)->rb_left);
+		else if (data->key > this->key)
+			new = &((*new)->rb_right);
+		else
+			return false;
+	}
+
+	rb_link_node(&data->node, parent, new);
+	rb_insert_color(&data->node, root);
+	return true;
+}
+
+static struct zram_rbtree_node *zram_rbtree_search(struct rb_root *root,
+		unsigned long key)
+{
+	struct rb_node *node = root->rb_node;
+	struct zram_rbtree_node *data;
+
+	while (node) {
+		data = rb_entry(node, struct zram_rbtree_node, node);
+		if (key < data->key)
+			node = node->rb_left;
+		else if (key > data->key)
+			node = node->rb_right;
+		else
+			return data;
+	}
+
+	return NULL;
+}
+
+static unsigned long zram_calc_hash(void *src, size_t len)
+{
+	return xxhash(src, len, 0);
+}
+
+static int zram_cmp_obj_and_merge(struct zram *zram, struct hlist_head *htable,
+		size_t htable_size, size_t index)
+{
+	struct zram_rbtree_node *rb_node;
+	struct zram_hash_node *node;
+	unsigned long handle, cur_handle;
+	size_t obj_size;
+	char *src, *buf;
+	unsigned long hash;
+	int ret = 0;
+
+	handle = zram_get_handle(zram, index);
+	if (!handle)
+		return ret;
+
+	obj_size = zram_get_obj_size(zram, index);
+	buf = kmalloc(obj_size, GFP_KERNEL);
+	if (!buf) {
+		pr_err("Failed to allocate zs_map_object buffer\n");
+		return -ENOMEM;
+	}
+
+	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
+	memcpy(buf, src, obj_size);
+	zs_unmap_object(zram->mem_pool, handle);
+	hash = zram_calc_hash(buf, obj_size);
+
+	mutex_lock(&zram_rbtree_mutex);
+	hlist_for_each_entry(node, &htable[hash % htable_size], next) {
+		int cmp;
+
+		zram_slot_lock(zram, node->index);
+
+		/*
+		 * Page may change as the hash table is being formed,
+		 * so the checks below are necessary.
+		 */
+		cur_handle = zram_get_handle(zram, node->index);
+		if (handle == cur_handle ||
+			obj_size != zram_get_obj_size(zram, node->index)) {
+			zram_slot_unlock(zram, node->index);
+			continue;
+		}
+
+		src = zs_map_object(zram->mem_pool, cur_handle, ZS_MM_RO);
+		cmp = memcmp(buf, src, obj_size);
+		zs_unmap_object(zram->mem_pool, cur_handle);
+
+		if (!cmp) {
+			rb_node = zram_rbtree_search(&zram->sph_rbtree, handle);
+
+			/*
+			 * This check is necessary in order not to zs_free an object
+			 * that someone else already refers to. This situation is
+			 * possible with repeated calls to zram_do_scan(). For example:
+			 *
+			 * [slot0] [slot1] [slot2] [slot3] [slot4]
+			 * [obj0]  [obj1]  [obj2]  [obj3]  [obj4]
+			 *
+			 * Let's imagine that obj2 and obj3 are equal, and we called
+			 * zram_do_scan() function:
+			 *
+			 * [slot0] [slot1] [slot2] [slot3] [slot4]
+			 * [obj0]  [obj1]  [obj2]  [obj2]  [obj4]
+			 *
+			 * Now slot2 and slot3 refer to the obj2 zsmalloc object.
+			 * Time passes, and now slot0 refers to obj0_n, which is
+			 * equal to obj2:
+			 *
+			 * [slot0]  [slot1] [slot2] [slot3] [slot4]
+			 * [obj0_n] [obj1]  [obj2]  [obj2]  [obj4]
+			 *
+			 * Now we call zram_do_scan() function again. We get to slot2,
+			 * and we understand that obj2 and obj0_n hashes are the same. We
+			 * try to zs_free(obj2), but slot3 also already refers to it.
+			 *
+			 * This is not correct!
+			 */
+			if (unlikely(rb_node && rb_node->cnt > 1)) {
+				zram_slot_unlock(zram, node->index);
+				continue;
+			}
+
+			zram_set_handle(zram, index, cur_handle);
+			zs_free(zram->mem_pool, handle);
+
+			rb_node = zram_rbtree_search(&zram->sph_rbtree, cur_handle);
+
+			if (!rb_node) {
+				rb_node = kzalloc(sizeof(struct zram_rbtree_node),
+								GFP_KERNEL);
+				if (!rb_node) {
+					pr_err("Failed to allocate rb_node\n");
+					ret = -ENOMEM;
+					zram_slot_unlock(zram, node->index);
+					mutex_unlock(&zram_rbtree_mutex);
+					goto merged_or_err;
+				}
+
+				rb_node->key = cur_handle;
+				/* Two slots refer to a zsmalloc object with the cur_handle key */
+				rb_node->cnt = 2;
+				zram_rbtree_insert(&zram->sph_rbtree, rb_node);
+			} else {
+				rb_node->cnt++;
+			}
+
+			atomic64_sub(obj_size, &zram->stats.compr_data_size);
+			zram_set_flag(zram, index, ZRAM_MERGED);
+			zram_set_flag(zram, node->index, ZRAM_MERGED);
+
+			zram_slot_unlock(zram, node->index);
+			mutex_unlock(&zram_rbtree_mutex);
+			goto merged_or_err;
+		}
+
+		zram_slot_unlock(zram, node->index);
+	}
+
+	mutex_unlock(&zram_rbtree_mutex);
+
+	node = kmalloc(sizeof(struct zram_hash_node), GFP_KERNEL);
+	if (!node) {
+		ret = -ENOMEM;
+		goto merged_or_err;
+	}
+
+	node->index = index;
+	hlist_add_head(&node->next, &htable[hash % htable_size]);
+
+merged_or_err:
+	kfree(buf);
+	return ret;
+}
+
+static void zram_free_htable_entries(struct hlist_head *htable,
+		size_t htable_size)
+{
+	struct hlist_node *n;
+	struct zram_hash_node *node;
+
+	hlist_for_each_entry_safe(node, n, htable, next) {
+		hlist_del(&node->next);
+		kfree(node);
+	}
+}
+
+static int zram_do_scan(struct zram *zram)
+{
+	size_t num_pages = zram->disksize >> PAGE_SHIFT;
+	size_t htable_size = num_pages;
+	size_t index;
+	struct hlist_head *htable;
+	int i, ret = 0;
+
+	htable = vzalloc(htable_size * sizeof(struct hlist_head));
+	if (!htable) {
+		pr_err("Failed to allocate hash table\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < htable_size; i++)
+		INIT_HLIST_HEAD(&htable[i]);
+
+	for (index = 0; index < num_pages; index++) {
+		zram_slot_lock(zram, index);
+
+		if (!zram_allocated(zram, index)) {
+			zram_slot_unlock(zram, index);
+			continue;
+		}
+
+		if (zram_test_flag(zram, index, ZRAM_UNDER_WB) ||
+			zram_test_flag(zram, index, ZRAM_WB) ||
+			zram_test_flag(zram, index, ZRAM_SAME)) {
+			zram_slot_unlock(zram, index);
+			continue;
+		}
+
+		/* Ignore pages that have been recompressed */
+		if (zram_get_priority(zram, index) != 0) {
+			zram_slot_unlock(zram, index);
+			continue;
+		}
+
+		ret = zram_cmp_obj_and_merge(zram, htable, htable_size, index);
+		zram_slot_unlock(zram, index);
+		if (ret != 0)
+			goto out;
+	}
+
+out:
+	zram_free_htable_entries(htable, htable_size);
+	vfree(htable);
+	return ret;
+}
+
 static void zram_meta_free(struct zram *zram, u64 disksize)
 {
 	size_t num_pages = disksize >> PAGE_SHIFT;
@@ -1324,6 +1578,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
 static void zram_free_page(struct zram *zram, size_t index)
 {
 	unsigned long handle;
+	struct zram_rbtree_node *node;
 
 #ifdef CONFIG_ZRAM_MEMORY_TRACKING
 	zram->table[index].ac_time = 0;
@@ -1361,7 +1616,26 @@ static void zram_free_page(struct zram *zram, size_t index)
 	if (!handle)
 		return;
 
-	zs_free(zram->mem_pool, handle);
+	if (zram_test_flag(zram, index, ZRAM_MERGED)) {
+		zram_clear_flag(zram, index, ZRAM_MERGED);
+		mutex_lock(&zram_rbtree_mutex);
+
+		node = zram_rbtree_search(&zram->sph_rbtree, handle);
+		BUG_ON(!node);
+
+		node->cnt--;
+		if (node->cnt == 0) {
+			rb_erase(&node->node, &zram->sph_rbtree);
+			mutex_unlock(&zram_rbtree_mutex);
+
+			zs_free(zram->mem_pool, handle);
+			kfree(node);
+		} else {
+			mutex_unlock(&zram_rbtree_mutex);
+		}
+	} else {
+		zs_free(zram->mem_pool, handle);
+	}
 
 	atomic64_sub(zram_get_obj_size(zram, index),
 			&zram->stats.compr_data_size);
@@ -2421,6 +2695,8 @@ static int zram_add(void)
 
 	comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
 
+	zram->sph_rbtree = RB_ROOT;
+
 	zram_debugfs_register(zram);
 	pr_info("Added device: %s\n", zram->disk->disk_name);
 	return device_id;
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index c5254626f051..4a7151c94523 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -56,6 +56,7 @@ enum zram_pageflags {
 
 	ZRAM_COMP_PRIORITY_BIT1, /* First bit of comp priority index */
 	ZRAM_COMP_PRIORITY_BIT2, /* Second bit of comp priority index */
+	ZRAM_MERGED,	/* page was merged */
 
 	__NR_ZRAM_PAGEFLAGS,
 };
@@ -140,5 +141,10 @@ struct zram {
 #ifdef CONFIG_ZRAM_MEMORY_TRACKING
 	struct dentry *debugfs_dir;
 #endif
+	/*
+	 * Same pages handles (sph) rbtree: the key is a handle shared by
+	 * several identical pages, the value is a reference counter.
+	 */
+	struct rb_root sph_rbtree;
 };
 #endif
-- 
2.25.1




* [RFC PATCH v1 2/4] zram: add merge sysfs knob
  2022-11-21 19:00 [RFC PATCH v1 0/4] Introduce merge identical pages mechanism Alexey Romanov
  2022-11-21 19:00 ` [RFC PATCH v1 1/4] zram: introduce " Alexey Romanov
@ 2022-11-21 19:00 ` Alexey Romanov
  2022-11-21 20:44 ` [RFC PATCH v1 0/4] Introduce merge identical pages mechanism Johannes Weiner
  2 siblings, 0 replies; 18+ messages in thread
From: Alexey Romanov @ 2022-11-21 19:00 UTC (permalink / raw)
  To: minchan, senozhatsky, ngupta, akpm
  Cc: linux-mm, linux-kernel, kernel, ddrokosov, Alexey Romanov

Allow zram to merge identical pages into a single one:

echo 1 > /sys/block/zramX/merge

Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
---
 drivers/block/zram/zram_drv.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 716c2f72805e..1dae3564cabd 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1197,6 +1197,30 @@ static ssize_t compact_store(struct device *dev,
 	return len;
 }
 
+static int zram_do_scan(struct zram *zram);
+
+static ssize_t merge_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t len)
+{
+	struct zram *zram = dev_to_zram(dev);
+	int ret;
+
+	down_read(&zram->init_lock);
+	if (!init_done(zram)) {
+		up_read(&zram->init_lock);
+		return -EINVAL;
+	}
+
+	ret = zram_do_scan(zram);
+	if (ret) {
+		up_read(&zram->init_lock);
+		return ret;
+	}
+
+	up_read(&zram->init_lock);
+	return len;
+}
+
 static ssize_t io_stat_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
@@ -2569,6 +2593,7 @@ static const struct block_device_operations zram_devops = {
 };
 
 static DEVICE_ATTR_WO(compact);
+static DEVICE_ATTR_WO(merge);
 static DEVICE_ATTR_RW(disksize);
 static DEVICE_ATTR_RO(initstate);
 static DEVICE_ATTR_WO(reset);
@@ -2609,6 +2634,7 @@ static struct attribute *zram_disk_attrs[] = {
 #ifdef CONFIG_ZRAM_WRITEBACK
 	&dev_attr_bd_stat.attr,
 #endif
+	&dev_attr_merge.attr,
 	&dev_attr_debug_stat.attr,
 #ifdef CONFIG_ZRAM_MULTI_COMP
 	&dev_attr_recomp_algorithm.attr,
-- 
2.25.1




* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-11-21 19:00 [RFC PATCH v1 0/4] Introduce merge identical pages mechanism Alexey Romanov
  2022-11-21 19:00 ` [RFC PATCH v1 1/4] zram: introduce " Alexey Romanov
  2022-11-21 19:00 ` [RFC PATCH v1 2/4] zram: add merge sysfs knob Alexey Romanov
@ 2022-11-21 20:44 ` Johannes Weiner
  2022-11-22  3:00   ` Sergey Senozhatsky
  2 siblings, 1 reply; 18+ messages in thread
From: Johannes Weiner @ 2022-11-21 20:44 UTC (permalink / raw)
  To: Alexey Romanov
  Cc: minchan, senozhatsky, ngupta, akpm, linux-mm, linux-kernel,
	kernel, ddrokosov

On Mon, Nov 21, 2022 at 10:00:16PM +0300, Alexey Romanov wrote:
> Hello!
> 
> This RFC series adds a feature that allows merging identical
> compressed pages into a single one. The main idea is that
> zram only stores references to zsmalloc objects, which hold
> the compressed content of the pages. The contents of the
> zsmalloc objects themselves don't change in any way.
> 
> For simplicity, let's imagine that 3 pages with the same content
> got into zram:
> 
> +----------------+   +----------------+   +----------------+
> |zram_table_entry|   |zram_table_entry|   |zram_table_entry|
> +-------+--------+   +-------+--------+   +--------+-------+
>         |                    |                     |
>         | handle (1)         | handle (2)          | handle (3)
> +-------v--------+   +-------v---------+  +--------v-------+
> |zsmalloc  object|   |zsmalloc  object |  |zsmalloc  object|
> ++--------------++   +-+-------------+-+  ++--------------++
>  +--------------+      +-------------+     +--------------+
>  | buffer: "abc"|      |buffer: "abc"|     | buffer: "abc"|
>  +--------------+      +-------------+     +--------------+
> 
> As you can see, the data is duplicated. The merge mechanism
> keeps (after scanning the objects) only one zsmalloc object.
> Here's what happens after the scan and merge:
> 
> +----------------+   +----------------+   +----------------+
> |zram_table_entry|   |zram_table_entry|   |zram_table_entry|
> +-------+--------+   +-------+--------+   +--------+-------+
>         |                    |                     |
>         | handle (1)         | handle (1)          | handle (1)
>         |           +--------v---------+           |
>         +-----------> zsmalloc  object <-----------+
>                     +--+-------------+-+
>                        +-------------+
>                        |buffer: "abc"|
>                        +-------------+
> 
> Thus, we reduce the amount of memory occupied by a factor of 3.
> 
> This mechanism doesn't affect the performance of zram itself
> in any way (except, perhaps, slightly in the zram_free_page()
> function). To describe each such identical object, we permanently
> need sizeof(struct zram_rbtree_node) bytes. So, for example, if the
> system has 20 identical buffers of size 1024, the memory gain is
> (20 * 1024) - (1 * 1024 + sizeof(struct zram_rbtree_node)) = 19456 -
> sizeof(struct zram_rbtree_node) bytes. Note, however, that these
> counts exclude the overhead of zsmalloc's own data structures.
> 
> Testing on my system (8 GB RAM + 1 GB zram swap) showed that under
> high load, on average, invoking the merge mechanism saves up to
> 15-20% of memory usage.

This looks pretty great.

However, I'm curious why it's specific to zram, and not part of
zsmalloc? That way zswap would benefit as well, without having to
duplicate the implementation. This happened for example with
page_same_filled() and zswap_is_page_same_filled().

It's zsmalloc's job to store content efficiently, so couldn't this
feature (just like the page_same_filled one) be an optimization that
zsmalloc does transparently for all its users?

> This patch series adds a new sysfs node (to trigger merging) and a new
> field in mm_stat (how many pages are currently merged in zram):
> 
>   $ cat /sys/block/zram/mm_stat
>     431452160 332984392 339894272 0 339894272 282 0 51374 51374 0
> 
>   $ echo 1 > /sys/block/zram/merge
> 
>   $ cat /sys/block/zram/mm_stat
>     431452160 270376848 287301504 0 339894272 282 0 51374 51374 6593

The optimal frequency for calling this is probably tied to prevalent
memory pressure, which is somewhat tricky to do from userspace.

Would it make sense to hook this up to a shrinker?



* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-11-21 20:44 ` [RFC PATCH v1 0/4] Introduce merge identical pages mechanism Johannes Weiner
@ 2022-11-22  3:00   ` Sergey Senozhatsky
  2022-11-22  3:07     ` Sergey Senozhatsky
  0 siblings, 1 reply; 18+ messages in thread
From: Sergey Senozhatsky @ 2022-11-22  3:00 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Alexey Romanov, minchan, senozhatsky, ngupta, akpm, linux-mm,
	linux-kernel, kernel, ddrokosov

On (22/11/21 15:44), Johannes Weiner wrote:
> This looks pretty great.
> 
> However, I'm curious why it's specific to zram, and not part of
> zsmalloc? That way zswap would benefit as well, without having to
> duplicate the implementation. This happened for example with
> page_same_filled() and zswap_is_page_same_filled().
> 
> It's zsmalloc's job to store content efficiently, so couldn't this
> feature (just like the page_same_filled one) be an optimization that
> zsmalloc does transparently for all its users?

Yeah, that's much needed functionality, but things may be "complicated".
We had a KSM-ish thing like that in zram in the past. Very briefly: we
quickly found out that the idea was patented by some company in China and
we couldn't figure out if it was safe to land that code upstream. So we
ended up dropping the patches.

https://lore.kernel.org/lkml/1494556204-25796-1-git-send-email-iamjoonsoo.kim@lge.com/

> Would it make sense to hook this up to a shrinker?

Hmm, ratelimited perhaps. We most likely don't want to scan the whole pool
every time a shrinker calls us (which can be quite often).



* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-11-22  3:00   ` Sergey Senozhatsky
@ 2022-11-22  3:07     ` Sergey Senozhatsky
  2022-11-22 12:14       ` Aleksey Romanov
  0 siblings, 1 reply; 18+ messages in thread
From: Sergey Senozhatsky @ 2022-11-22  3:07 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Alexey Romanov, minchan, ngupta, akpm, linux-mm, linux-kernel,
	kernel, ddrokosov, Sergey Senozhatsky

On (22/11/22 12:00), Sergey Senozhatsky wrote:
> On (22/11/21 15:44), Johannes Weiner wrote:
> > This looks pretty great.
> > 
> > However, I'm curious why it's specific to zram, and not part of
> > zsmalloc? That way zswap would benefit as well, without having to
> > duplicate the implementation. This happened for example with
> > page_same_filled() and zswap_is_page_same_filled().
> > 
> > It's zsmalloc's job to store content efficiently, so couldn't this
> > feature (just like the page_same_filled one) be an optimization that
> > zsmalloc does transparently for all its users?
> 
> Yeah, that's much needed functionality, but things may be "complicated".
> We had a KSM-ish thing like that in zram in the past. Very briefly: we
> quickly found out that the idea was patented by some company in China and
> we couldn't figure out if it was safe to land that code upstream. So we
> ended up dropping the patches.
> 
> https://lore.kernel.org/lkml/1494556204-25796-1-git-send-email-iamjoonsoo.kim@lge.com/

IIRC that was the patent in question:

https://patentimages.storage.googleapis.com/e2/66/9e/0ddbfae5c182ac/US9977598.pdf



* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-11-22  3:07     ` Sergey Senozhatsky
@ 2022-11-22 12:14       ` Aleksey Romanov
  2022-11-23  4:13         ` Sergey Senozhatsky
  0 siblings, 1 reply; 18+ messages in thread
From: Aleksey Romanov @ 2022-11-22 12:14 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Johannes Weiner, minchan, ngupta, akpm, linux-mm, linux-kernel,
	kernel, Dmitry Rokosov

Hello!

On Tue, Nov 22, 2022 at 12:07:42PM +0900, Sergey Senozhatsky wrote:
> On (22/11/22 12:00), Sergey Senozhatsky wrote:
> > On (22/11/21 15:44), Johannes Weiner wrote:
> > > This looks pretty great.
> > > 
> > > However, I'm curious why it's specific to zram, and not part of
> > > zsmalloc? That way zswap would benefit as well, without having to
> > > duplicate the implementation. This happened for example with
> > > page_same_filled() and zswap_is_page_same_filled().
> > > 
> > > It's zsmalloc's job to store content efficiently, so couldn't this
> > > feature (just like the page_same_filled one) be an optimization that
> > > zsmalloc does transparently for all its users?
> > 
> > Yeah, that's much needed functionality, but things may be "complicated".
> > We had a KSM-ish thing like that in zram in the past. Very briefly: we
> > quickly found out that the idea was patented by some company in China and
> > we couldn't figure out if it was safe to land that code upstream. So we
> > ended up dropping the patches.
> > 
> > https://lore.kernel.org/lkml/1494556204-25796-1-git-send-email-iamjoonsoo.kim@lge.com/
> 
> IIRC that was the patent in question:
> 
> https://patentimages.storage.googleapis.com/e2/66/9e/0ddbfae5c182ac/US9977598.pdf

I think the patent is talking about "mapping the virtual address" (like
in KSM). But zram works with the "handle" abstraction, which is a boxed
pointer to the required object. I think my implementation and the patent
are slightly different.

Also, the patent speaks of "compressing" pages. In this case, we can add
a zs_merge() function (like zs_compact()), that is, move the merge logic
to the allocator level. zsmalloc doesn't say anything about what objects
it can work with. Implementation at the zsmalloc level is possible,
though more complicated than at the zram level.

I believe that we can implement at least one of the options I proposed.

What do you think?

-- 
Thank you,
Alexey


* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-11-22 12:14       ` Aleksey Romanov
@ 2022-11-23  4:13         ` Sergey Senozhatsky
  2022-11-23  8:53           ` Dmitry Rokosov
  2022-11-23  9:07           ` Aleksey Romanov
  0 siblings, 2 replies; 18+ messages in thread
From: Sergey Senozhatsky @ 2022-11-23  4:13 UTC (permalink / raw)
  To: Aleksey Romanov
  Cc: Sergey Senozhatsky, Johannes Weiner, minchan, ngupta, akpm,
	linux-mm, linux-kernel, kernel, Dmitry Rokosov

On (22/11/22 12:14), Aleksey Romanov wrote:
> > IIRC that was the patent in question:
> > 
> > https://patentimages.storage.googleapis.com/e2/66/9e/0ddbfae5c182ac/US9977598.pdf
> 
> I think the patent is talking about "mapping the virtual address" (like
> in KSM). But zram works with the "handle" abstraction, which is a boxed
> pointer to the required object. I think my implementation and the patent
> are slightly different.
> 
> Also, the patent speaks of "compressing" pages. In this case, we can add
> a zs_merge() function (like zs_compact()), that is, move the merge logic
> to the allocator level. zsmalloc doesn't say anything about what objects
> it can work with. Implementation at the zsmalloc level is possible,
> though more complicated than at the zram level.
> 
> I believe that we can implement at least one of the options I proposed.
> 
> What do you think?

Oh yeah, I'm not saying that we cannot have something like that
in zram/zsmalloc; I just wanted to give some historical retrospective
on this and point out some implementation details that should be
considered.



* Re: [RFC PATCH v1 1/4] zram: introduce merge identical pages mechanism
  2022-11-21 19:00 ` [RFC PATCH v1 1/4] zram: introduce " Alexey Romanov
@ 2022-11-23  8:25   ` Chen Wandun
  2022-11-23  9:04     ` Aleksey Romanov
  0 siblings, 1 reply; 18+ messages in thread
From: Chen Wandun @ 2022-11-23  8:25 UTC (permalink / raw)
  To: Alexey Romanov, minchan, senozhatsky, ngupta, akpm
  Cc: linux-mm, linux-kernel, kernel, ddrokosov



On 2022/11/22 3:00, Alexey Romanov wrote:
> zram maps each page (struct page) to a zsmalloc object that
> contains a compressed buffer of that page's data. This
> creates data redundancy: for example, two identical pages
> will be stored in compressed form in the zsmalloc allocator twice.
>
> This patch adds a mechanism that scans the zram_table_entry array
> and frees all identical objects in the zsmalloc allocator, leaving
> only one. All zram_table_entry elements that referenced the
> freed objects now refer to the same surviving object.
>
> To implement this mechanism, we sequentially scan the
> zram_table_entry array, computing a hash of the contents of
> each compressed page (zsmalloc object) and entering the index of
> the object into a hash table (hlist_head). If the hashes match,
> we free the identical zsmalloc objects (zs_free) and update
> the link rbtree.
>
> Also, we can't just call zs_free() function during zram_free_page().
> Before calling this function we have to make sure that no one else
> refers to this zsmalloc object.
>
> To implement the mechanism for merging identical compressed
> pages (zsmalloc objects), an rbtree is needed.
>
> The tree node key is a reference to the zsmalloc object
> (handle), and the value is the number of references to
> this object (atomic counter). This is necessary for data
> consistency so that we do not zs_free the object referenced
> by any element of the zram_table_entry array.
>
> Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
> ---
>   drivers/block/zram/zram_drv.c | 278 +++++++++++++++++++++++++++++++++-
>   drivers/block/zram/zram_drv.h |   6 +
>   2 files changed, 283 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index e290d6d97047..716c2f72805e 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -33,12 +33,15 @@
>   #include <linux/debugfs.h>
>   #include <linux/cpuhotplug.h>
>   #include <linux/part_stat.h>
> +#include <linux/hashtable.h>
> +#include <linux/xxhash.h>
>   
>   #include "zram_drv.h"
>   
>   static DEFINE_IDR(zram_index_idr);
>   /* idr index must be protected */
>   static DEFINE_MUTEX(zram_index_mutex);
> +static DEFINE_MUTEX(zram_rbtree_mutex);
>   
>   static int zram_major;
>   static const char *default_compressor = CONFIG_ZRAM_DEF_COMP;
> @@ -57,6 +60,16 @@ static void zram_free_page(struct zram *zram, size_t index);
>   static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
>   				u32 index, int offset, struct bio *bio);
>   
> +struct zram_rbtree_node {
> +	struct rb_node node;
> +	unsigned long key;
> +	unsigned long cnt;
> +};
> +
> +struct zram_hash_node {
> +	unsigned long index;
> +	struct hlist_node next;
> +};
>   
>   static int zram_slot_trylock(struct zram *zram, u32 index)
>   {
> @@ -1283,6 +1296,247 @@ static DEVICE_ATTR_RO(bd_stat);
>   #endif
>   static DEVICE_ATTR_RO(debug_stat);
>   
> +static bool zram_rbtree_insert(struct rb_root *root, struct zram_rbtree_node *data)
> +{
> +	struct rb_node **new = &(root->rb_node), *parent = NULL;
> +	struct zram_rbtree_node *this;
> +
> +	while (*new) {
> +		this = rb_entry(*new, struct zram_rbtree_node, node);
> +		parent = *new;
> +		if (data->key < this->key)
> +			new = &((*new)->rb_left);
> +		else if (data->key > this->key)
> +			new = &((*new)->rb_right);
> +		else
> +			return false;
> +	}
> +
> +	rb_link_node(&data->node, parent, new);
> +	rb_insert_color(&data->node, root);
> +	return true;
> +}
> +
> +static struct zram_rbtree_node *zram_rbtree_search(struct rb_root *root,
> +		unsigned long key)
> +{
> +	struct rb_node *node = root->rb_node;
> +	struct zram_rbtree_node *data;
> +
> +	while (node) {
> +		data = rb_entry(node, struct zram_rbtree_node, node);
> +		if (key < data->key)
> +			node = node->rb_left;
> +		else if (key > data->key)
> +			node = node->rb_right;
> +		else
> +			return data;
> +	}
> +
> +	return NULL;
> +}
> +
> +static unsigned long zram_calc_hash(void *src, size_t len)
> +{
> +	return xxhash(src, len, 0);
> +}
> +
> +static int zram_cmp_obj_and_merge(struct zram *zram, struct hlist_head *htable,
> +		size_t htable_size, size_t index)
> +{
> +	struct zram_rbtree_node *rb_node;
> +	struct zram_hash_node *node;
> +	unsigned long handle, cur_handle;
> +	size_t obj_size;
> +	char *src, *buf;
> +	unsigned long hash;
> +	int ret = 0;
> +
> +	handle = zram_get_handle(zram, index);
> +	if (!handle)
> +		return ret;
> +
> +	obj_size = zram_get_obj_size(zram, index);
> +	buf = kmalloc(obj_size, GFP_KERNEL);
> +	if (!buf) {
> +		pr_err("Failed to allocate zs_map_object buffer\n");
> +		return -ENOMEM;
> +	}
> +
> +	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
> +	memcpy(buf, src, obj_size);
> +	zs_unmap_object(zram->mem_pool, handle);
> +	hash = zram_calc_hash(buf, obj_size);
> +
> +	mutex_lock(&zram_rbtree_mutex);
> +	hlist_for_each_entry(node, &htable[hash % htable_size], next) {
> +		int cmp;
> +
> +		zram_slot_lock(zram, node->index);
> +
> +		/*
> +		 * Page may change as the hash table is being formed,
> +		 * so the checks below are necessary.
> +		 */
> +		cur_handle = zram_get_handle(zram, node->index);
> +		if (handle == cur_handle ||
> +			obj_size != zram_get_obj_size(zram, node->index)) {
> +			zram_slot_unlock(zram, node->index);
> +			continue;
> +		}
> +
> +		src = zs_map_object(zram->mem_pool, cur_handle, ZS_MM_RO);
> +		cmp = memcmp(buf, src, obj_size);
> +		zs_unmap_object(zram->mem_pool, cur_handle);
> +
> +		if (!cmp) {
> +			rb_node = zram_rbtree_search(&zram->sph_rbtree, handle);
> +
> +			/*
> +			 * This check is necessary in order not to zs_free an object
> +			 * that someone already refers to. This situation is possible
> +			 * with repeated calls to zram_do_scan(). For example:
> +			 *
> +			 * [slot0] [slot1] [slot2] [slot3] [slot4]
> +			 * [obj0]  [obj1]  [obj2]  [obj3]  [obj4]
> +			 *
> +			 * Let's imagine that obj2 and obj3 are equal, and we called
> +			 * zram_do_scan() function:
> +			 *
> +			 * [slot0] [slot1] [slot2] [slot3] [slot4]
> +			 * [obj0]  [obj1]  [obj2]  [obj2]  [obj4]
> +			 *
> +			 * Now, slot2 and slot3 refers to obj2 zsmalloc object.
> +			 * Time passed, now slot0 refers to obj0_n, which is equal
> +			 * to obj2:
> +			 *
> +			 * [slot0]  [slot1] [slot2] [slot3] [slot4]
> +			 * [obj0_n] [obj1]  [obj2]  [obj2]  [obj4]
> +			 *
> +			 * Now we call zram_do_scan() function again. We get to slot2,
> +			 * and we understand that obj2 and obj0_n hashes are the same. We
> +			 * try to zs_free(obj2), but slot3 also already refers to it.
> +			 *
> +			 * This is not correct!
> +			 */
In this case, it seems like it can't merge all the identical objects, is that right?

How about letting slot2 point to obj0_n and decreasing the rb_node ref
of obj2 in the first loop, then letting slot3 point to obj0_n and
decreasing the ref again in the next loop? Then obj2 can be freed.
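The alternative suggested above can be sketched as a self-contained toy (the names `obj_new`/`repoint` are hypothetical, not from the patch): give every object its own reference count, so a shared object loses one reference per pass and is freed only when the count drops to zero:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* An object with an embedded reference count, standing in for a
 * zsmalloc object plus its zram_rbtree_node counter. */
struct obj {
	char data[8];
	int cnt;
};

static struct obj *obj_new(const char *s)
{
	struct obj *o = calloc(1, sizeof(*o));

	strcpy(o->data, s);
	o->cnt = 1;        /* one slot refers to it initially */
	return o;
}

/* Re-point one slot from its current (possibly shared) object to the
 * survivor: take a reference on the survivor, drop one on the old
 * object, and free the old object once nobody refers to it any more. */
static void repoint(struct obj **slotp, struct obj *survivor)
{
	struct obj *old = *slotp;

	survivor->cnt++;
	*slotp = survivor;
	if (--old->cnt == 0)
		free(old); /* last reference gone: the analogue of zs_free() */
}
```

With slot2 and slot3 both pointing at a shared obj2 (cnt == 2), the first pass re-points slot2 at obj0_n and drops obj2 to cnt 1; the next pass re-points slot3, drops obj2 to 0, and frees it.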
> +			if (unlikely(rb_node && rb_node->cnt > 1)) {
> +				zram_slot_unlock(zram, node->index);
> +				continue;
> +			}
> +
> +			zram_set_handle(zram, index, cur_handle);
> +			zs_free(zram->mem_pool, handle);
> +
> +			rb_node = zram_rbtree_search(&zram->sph_rbtree, cur_handle);
> +
> +			if (!rb_node) {
> +				rb_node = kzalloc(sizeof(struct zram_rbtree_node),
> +								GFP_KERNEL);
> +				if (!rb_node) {
> +					pr_err("Failed to allocate rb_node\n");
> +					ret = -ENOMEM;
> +					zram_slot_unlock(zram, node->index);
> +					mutex_unlock(&zram_rbtree_mutex);
> +					goto merged_or_err;
> +				}
> +
> +				rb_node->key = cur_handle;
> +			/* Two slots refer to a zsmalloc object with the cur_handle key */
> +				rb_node->cnt = 2;
> +				zram_rbtree_insert(&zram->sph_rbtree, rb_node);
> +			} else {
> +				rb_node->cnt++;
> +			}
> +
> +			atomic64_sub(obj_size, &zram->stats.compr_data_size);
> +			zram_set_flag(zram, index, ZRAM_MERGED);
> +			zram_set_flag(zram, node->index, ZRAM_MERGED);
> +
> +			zram_slot_unlock(zram, node->index);
> +			mutex_unlock(&zram_rbtree_mutex);
> +			goto merged_or_err;
> +		}
> +
> +		zram_slot_unlock(zram, node->index);
> +	}
> +
> +	mutex_unlock(&zram_rbtree_mutex);
> +
> +	node = kmalloc(sizeof(struct zram_hash_node), GFP_KERNEL);
> +	if (!node) {
> +		ret = -ENOMEM;
> +		goto merged_or_err;
> +	}
> +
> +	node->index = index;
> +	hlist_add_head(&node->next, &htable[hash % htable_size]);
> +
> +merged_or_err:
> +	kfree(buf);
> +	return ret;
> +}
> +
> +static void zram_free_htable_entries(struct hlist_head *htable,
> +		size_t htable_size)
> +{
> +	struct hlist_node *n;
> +	struct zram_hash_node *node;
> +
> +	hlist_for_each_entry_safe(node, n, htable, next) {
> +		hlist_del(&node->next);
> +		kfree(node);
> +	}
> +}
> +
> +static int zram_do_scan(struct zram *zram)
> +{
> +	size_t num_pages = zram->disksize >> PAGE_SHIFT;
> +	size_t htable_size = num_pages;
> +	size_t index;
> +	struct hlist_head *htable;
> +	int i, ret = 0;
> +
> +	htable = vzalloc(htable_size * sizeof(struct hlist_head));
> +	if (!htable) {
> +		pr_err("Failed to allocate hash table\n");
> +		return -ENOMEM;
> +	}
> +
> +	for (i = 0; i < htable_size; i++)
> +		INIT_HLIST_HEAD(&htable[i]);
> +
> +	for (index = 0; index < num_pages; index++) {
> +		zram_slot_lock(zram, index);
> +
> +		if (!zram_allocated(zram, index)) {
> +			zram_slot_unlock(zram, index);
> +			continue;
> +		}
> +
> +		if (zram_test_flag(zram, index, ZRAM_UNDER_WB) ||
> +			zram_test_flag(zram, index, ZRAM_WB) ||
> +			zram_test_flag(zram, index, ZRAM_SAME)) {
> +			zram_slot_unlock(zram, index);
> +			continue;
> +		}
> +
> +		/* Ignore pages that have been recompressed */
> +		if (zram_get_priority(zram, index) != 0) {
> +			zram_slot_unlock(zram, index);
> +			continue;
> +		}
> +
> +		ret = zram_cmp_obj_and_merge(zram, htable, htable_size, index);
> +		zram_slot_unlock(zram, index);
> +		if (ret != 0)
> +			goto out;
> +	}
> +
> +out:
> +	zram_free_htable_entries(htable, htable_size);
> +	vfree(htable);
> +	return ret;
> +}
> +
>   static void zram_meta_free(struct zram *zram, u64 disksize)
>   {
>   	size_t num_pages = disksize >> PAGE_SHIFT;
> @@ -1324,6 +1578,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
>   static void zram_free_page(struct zram *zram, size_t index)
>   {
>   	unsigned long handle;
> +	struct zram_rbtree_node *node;
>   
>   #ifdef CONFIG_ZRAM_MEMORY_TRACKING
>   	zram->table[index].ac_time = 0;
> @@ -1361,7 +1616,26 @@ static void zram_free_page(struct zram *zram, size_t index)
>   	if (!handle)
>   		return;
>   
> -	zs_free(zram->mem_pool, handle);
> +	if (zram_test_flag(zram, index, ZRAM_MERGED)) {
> +		zram_clear_flag(zram, index, ZRAM_MERGED);
> +		mutex_lock(&zram_rbtree_mutex);
> +
> +		node = zram_rbtree_search(&zram->sph_rbtree, handle);
> +		BUG_ON(!node);
> +
> +		node->cnt--;
> +		if (node->cnt == 0) {
> +			rb_erase(&node->node, &zram->sph_rbtree);
> +			mutex_unlock(&zram_rbtree_mutex);
> +
> +			zs_free(zram->mem_pool, handle);
> +			kfree(node);
> +		} else {
> +			mutex_unlock(&zram_rbtree_mutex);
> +		}
> +	} else {
> +		zs_free(zram->mem_pool, handle);
> +	}
>   
>   	atomic64_sub(zram_get_obj_size(zram, index),
>   			&zram->stats.compr_data_size);
> @@ -2421,6 +2695,8 @@ static int zram_add(void)
>   
>   	comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
>   
> +	zram->sph_rbtree = RB_ROOT;
> +
>   	zram_debugfs_register(zram);
>   	pr_info("Added device: %s\n", zram->disk->disk_name);
>   	return device_id;
> diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> index c5254626f051..4a7151c94523 100644
> --- a/drivers/block/zram/zram_drv.h
> +++ b/drivers/block/zram/zram_drv.h
> @@ -56,6 +56,7 @@ enum zram_pageflags {
>   
>   	ZRAM_COMP_PRIORITY_BIT1, /* First bit of comp priority index */
>   	ZRAM_COMP_PRIORITY_BIT2, /* Second bit of comp priority index */
> +	ZRAM_MERGED,	/* page was merged */
>   
>   	__NR_ZRAM_PAGEFLAGS,
>   };
> @@ -140,5 +141,10 @@ struct zram {
>   #ifdef CONFIG_ZRAM_MEMORY_TRACKING
>   	struct dentry *debugfs_dir;
>   #endif
> +	/*
> +	 * This is the same-pages handle rbtree, where the key is a handle
> +	 * to identical pages and the value is a reference counter
> +	 */
> +	struct rb_root sph_rbtree;
>   };
>   #endif



^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-11-23  4:13         ` Sergey Senozhatsky
@ 2022-11-23  8:53           ` Dmitry Rokosov
  2022-12-01 10:14             ` Dmitry Rokosov
  2022-11-23  9:07           ` Aleksey Romanov
  1 sibling, 1 reply; 18+ messages in thread
From: Dmitry Rokosov @ 2022-11-23  8:53 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Aleksey Romanov, Johannes Weiner, minchan, ngupta, akpm,
	linux-mm, linux-kernel, kernel

Hello Sergey,

Thank you for your quick and detailed support! Here is my two cents
below.

On Wed, Nov 23, 2022 at 01:13:55PM +0900, Sergey Senozhatsky wrote:
> On (22/11/22 12:14), Aleksey Romanov wrote:
> > > IIRC that was patent in question:
> > > 
> > > https://patentimages.storage.googleapis.com/e2/66/9e/0ddbfae5c182ac/US9977598.pdf
> > 
> > I think the patent is talking about "mapping the virtual address" (like
> > in KSM). But zram works with the "handle" abstraction, which is a boxed
> > pointer to the required object. I think my implementation and the patent
> > are slightly different. 
> > 
> > Also, the patent speaks of "compressing" pages. In this case, we can add
> > zs_merge() function (like zs_compact()), that is, remove the merge logic
> > at the allocator level. zsmalloc doesn't say anything about what objects
> > it can work with. Implementation at the zsmalloc level is possible,
> > though more complicated than at the zram level. 
> > 
> > I believe that we can implement at least one of the options I proposed.
> > 
> > What do you think?
> 
> Oh, yeah, I'm not saying that we cannot have something like that
> in zram/zsmalloc, just wanted to give some historical retrospective
> on this and point at some implementation details that should be
> considered.

It's a very curious situation, I would say. I'm not so familiar with US
patent law, but I suppose it should be based on some keywords and
algorithms.

In terms of the algorithm, Alexey's patch differs a little from the one
suggested in the patent paper. As for keywords, I think that by moving
Alexey's same-page merging algorithm into zsmalloc we would lose the
"compressing" keyword, because zsmalloc operates on "objects" only; it
doesn't matter whether they are compressed or not.

Anyway, could you please suggest who could help us understand whether
it's safe to use such a same-page merging algorithm upstream or not?
Maybe we could ask the Linux Foundation lawyers, just a guess.
I'm sure we shouldn't decline helpful features and optimizations without
complete certainty about all the restrictions.

-- 
Thank you,
Dmitry


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 1/4] zram: introduce merge identical pages mechanism
  2022-11-23  8:25   ` Chen Wandun
@ 2022-11-23  9:04     ` Aleksey Romanov
  0 siblings, 0 replies; 18+ messages in thread
From: Aleksey Romanov @ 2022-11-23  9:04 UTC (permalink / raw)
  To: Chen Wandun
  Cc: minchan, senozhatsky, ngupta, akpm, linux-mm, linux-kernel,
	kernel, Dmitry Rokosov

On Wed, Nov 23, 2022 at 04:25:00PM +0800, Chen Wandun wrote:
> 
> 
> On 2022/11/22 3:00, Alexey Romanov wrote:
> > zram maps each page (struct page) to a zsmalloc object that
> > contains a compressed buffer of that page's data. This fact
> > generates data redundancy: for example, two identical pages
> > will be stored in compressed form in the zsmalloc allocator twice.
> > 
> > This patch adds a mechanism that scans the zram_table_entry array
> > and frees all identical objects in the zsmalloc allocator, leaving
> > only one. All zram_table_entry elements which referenced the
> > freed objects now refer to the same remaining object.
> > 
> > To implement this mechanism, we sequentially scan the
> > zram_table_entry array, computing a hash of the contents of
> > each compressed page (zsmalloc object) and entering the index of
> > the object into a hash table (hlist_head). If the hashes match
> > and the contents compare equal, we free the duplicate zsmalloc
> > objects (zs_free) and update the reference rbtree.
> > 
> > Also, we can't just call zs_free() function during zram_free_page().
> > Before calling this function we have to make sure that no one else
> > refers to this zsmalloc object.
> > 
> > To implement the mechanism for merging identical compressed
> > pages (zsmalloc objects), a rbtree is needed.
> > 
> > The tree node key is a reference to the zsmalloc object
> > (handle), and the value is the number of references to
> > this object (atomic counter). This is necessary for data
> > consistency so that we do not zs_free the object referenced
> > by any element of the zram_table_entry array.
> > 
> > Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
> > ---
> >   drivers/block/zram/zram_drv.c | 278 +++++++++++++++++++++++++++++++++-
> >   drivers/block/zram/zram_drv.h |   6 +
> >   2 files changed, 283 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index e290d6d97047..716c2f72805e 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -33,12 +33,15 @@
> >   #include <linux/debugfs.h>
> >   #include <linux/cpuhotplug.h>
> >   #include <linux/part_stat.h>
> > +#include <linux/hashtable.h>
> > +#include <linux/xxhash.h>
> >   #include "zram_drv.h"
> >   static DEFINE_IDR(zram_index_idr);
> >   /* idr index must be protected */
> >   static DEFINE_MUTEX(zram_index_mutex);
> > +static DEFINE_MUTEX(zram_rbtree_mutex);
> >   static int zram_major;
> >   static const char *default_compressor = CONFIG_ZRAM_DEF_COMP;
> > @@ -57,6 +60,16 @@ static void zram_free_page(struct zram *zram, size_t index);
> >   static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
> >   				u32 index, int offset, struct bio *bio);
> > +struct zram_rbtree_node {
> > +	struct rb_node node;
> > +	unsigned long key;
> > +	unsigned long cnt;
> > +};
> > +
> > +struct zram_hash_node {
> > +	unsigned long index;
> > +	struct hlist_node next;
> > +};
> >   static int zram_slot_trylock(struct zram *zram, u32 index)
> >   {
> > @@ -1283,6 +1296,247 @@ static DEVICE_ATTR_RO(bd_stat);
> >   #endif
> >   static DEVICE_ATTR_RO(debug_stat);
> > +static bool zram_rbtree_insert(struct rb_root *root, struct zram_rbtree_node *data)
> > +{
> > +	struct rb_node **new = &(root->rb_node), *parent = NULL;
> > +	struct zram_rbtree_node *this;
> > +
> > +	while (*new) {
> > +		this = rb_entry(*new, struct zram_rbtree_node, node);
> > +		parent = *new;
> > +		if (data->key < this->key)
> > +			new = &((*new)->rb_left);
> > +		else if (data->key > this->key)
> > +			new = &((*new)->rb_right);
> > +		else
> > +			return false;
> > +	}
> > +
> > +	rb_link_node(&data->node, parent, new);
> > +	rb_insert_color(&data->node, root);
> > +	return true;
> > +}
> > +
> > +static struct zram_rbtree_node *zram_rbtree_search(struct rb_root *root,
> > +		unsigned long key)
> > +{
> > +	struct rb_node *node = root->rb_node;
> > +	struct zram_rbtree_node *data;
> > +
> > +	while (node) {
> > +		data = rb_entry(node, struct zram_rbtree_node, node);
> > +		if (key < data->key)
> > +			node = node->rb_left;
> > +		else if (key > data->key)
> > +			node = node->rb_right;
> > +		else
> > +			return data;
> > +	}
> > +
> > +	return NULL;
> > +}
> > +
> > +static unsigned long zram_calc_hash(void *src, size_t len)
> > +{
> > +	return xxhash(src, len, 0);
> > +}
> > +
> > +static int zram_cmp_obj_and_merge(struct zram *zram, struct hlist_head *htable,
> > +		size_t htable_size, size_t index)
> > +{
> > +	struct zram_rbtree_node *rb_node;
> > +	struct zram_hash_node *node;
> > +	unsigned long handle, cur_handle;
> > +	size_t obj_size;
> > +	char *src, *buf;
> > +	unsigned long hash;
> > +	int ret = 0;
> > +
> > +	handle = zram_get_handle(zram, index);
> > +	if (!handle)
> > +		return ret;
> > +
> > +	obj_size = zram_get_obj_size(zram, index);
> > +	buf = kmalloc(obj_size, GFP_KERNEL);
> > +	if (!buf) {
> > +		pr_err("Failed to allocate zs_map_object buffer\n");
> > +		return -ENOMEM;
> > +	}
> > +
> > +	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
> > +	memcpy(buf, src, obj_size);
> > +	zs_unmap_object(zram->mem_pool, handle);
> > +	hash = zram_calc_hash(buf, obj_size);
> > +
> > +	mutex_lock(&zram_rbtree_mutex);
> > +	hlist_for_each_entry(node, &htable[hash % htable_size], next) {
> > +		int cmp;
> > +
> > +		zram_slot_lock(zram, node->index);
> > +
> > +		/*
> > +		 * Page may change as the hash table is being formed,
> > +		 * so the checks below are necessary.
> > +		 */
> > +		cur_handle = zram_get_handle(zram, node->index);
> > +		if (handle == cur_handle ||
> > +			obj_size != zram_get_obj_size(zram, node->index)) {
> > +			zram_slot_unlock(zram, node->index);
> > +			continue;
> > +		}
> > +
> > +		src = zs_map_object(zram->mem_pool, cur_handle, ZS_MM_RO);
> > +		cmp = memcmp(buf, src, obj_size);
> > +		zs_unmap_object(zram->mem_pool, cur_handle);
> > +
> > +		if (!cmp) {
> > +			rb_node = zram_rbtree_search(&zram->sph_rbtree, handle);
> > +
> > +			/*
> > +			 * This check is necessary in order not to zs_free an object
> > +			 * that someone already refers to. This situation is possible
> > +			 * with repeated calls to zram_do_scan(). For example:
> > +			 *
> > +			 * [slot0] [slot1] [slot2] [slot3] [slot4]
> > +			 * [obj0]  [obj1]  [obj2]  [obj3]  [obj4]
> > +			 *
> > +			 * Let's imagine that obj2 and obj3 are equal, and we called
> > +			 * zram_do_scan() function:
> > +			 *
> > +			 * [slot0] [slot1] [slot2] [slot3] [slot4]
> > +			 * [obj0]  [obj1]  [obj2]  [obj2]  [obj4]
> > +			 *
> > +			 * Now, slot2 and slot3 refers to obj2 zsmalloc object.
> > +			 * Time passed, now slot0 refers to obj0_n, which is equal
> > +			 * to obj2:
> > +			 *
> > +			 * [slot0]  [slot1] [slot2] [slot3] [slot4]
> > +			 * [obj0_n] [obj1]  [obj2]  [obj2]  [obj4]
> > +			 *
> > +			 * Now we call zram_do_scan() function again. We get to slot2,
> > +			 * and we understand that obj2 and obj0_n hashes are the same. We
> > +			 * try to zs_free(obj2), but slot3 also already refers to it.
> > +			 *
> > +			 * This is not correct!
> > +			 */
> In this case, it seems like it can't merge all the identical objects, is that right?

Yes.

> 
> How about letting slot2 point to obj0_n and decreasing the rb_node ref
> of obj2 in the first loop, then letting slot3 point to obj0_n and
> decreasing the ref again in the next loop? Then obj2 can be freed.

Sure. I will address your remark in the next series.

> > +			if (unlikely(rb_node && rb_node->cnt > 1)) {
> > +				zram_slot_unlock(zram, node->index);
> > +				continue;
> > +			}
> > +
> > +			zram_set_handle(zram, index, cur_handle);
> > +			zs_free(zram->mem_pool, handle);
> > +
> > +			rb_node = zram_rbtree_search(&zram->sph_rbtree, cur_handle);
> > +
> > +			if (!rb_node) {
> > +				rb_node = kzalloc(sizeof(struct zram_rbtree_node),
> > +								GFP_KERNEL);
> > +				if (!rb_node) {
> > +					pr_err("Failed to allocate rb_node\n");
> > +					ret = -ENOMEM;
> > +					zram_slot_unlock(zram, node->index);
> > +					mutex_unlock(&zram_rbtree_mutex);
> > +					goto merged_or_err;
> > +				}
> > +
> > +				rb_node->key = cur_handle;
> > +				/* Two slots refer to a zsmalloc object with the cur_handle key */
> > +				rb_node->cnt = 2;
> > +				zram_rbtree_insert(&zram->sph_rbtree, rb_node);
> > +			} else {
> > +				rb_node->cnt++;
> > +			}
> > +
> > +			atomic64_sub(obj_size, &zram->stats.compr_data_size);
> > +			zram_set_flag(zram, index, ZRAM_MERGED);
> > +			zram_set_flag(zram, node->index, ZRAM_MERGED);
> > +
> > +			zram_slot_unlock(zram, node->index);
> > +			mutex_unlock(&zram_rbtree_mutex);
> > +			goto merged_or_err;
> > +		}
> > +
> > +		zram_slot_unlock(zram, node->index);
> > +	}
> > +
> > +	mutex_unlock(&zram_rbtree_mutex);
> > +
> > +	node = kmalloc(sizeof(struct zram_hash_node), GFP_KERNEL);
> > +	if (!node) {
> > +		ret = -ENOMEM;
> > +		goto merged_or_err;
> > +	}
> > +
> > +	node->index = index;
> > +	hlist_add_head(&node->next, &htable[hash % htable_size]);
> > +
> > +merged_or_err:
> > +	kfree(buf);
> > +	return ret;
> > +}
> > +
> > +static void zram_free_htable_entries(struct hlist_head *htable,
> > +		size_t htable_size)
> > +{
> > +	struct hlist_node *n;
> > +	struct zram_hash_node *node;
> > +
> > +	hlist_for_each_entry_safe(node, n, htable, next) {
> > +		hlist_del(&node->next);
> > +		kfree(node);
> > +	}
> > +}
> > +
> > +static int zram_do_scan(struct zram *zram)
> > +{
> > +	size_t num_pages = zram->disksize >> PAGE_SHIFT;
> > +	size_t htable_size = num_pages;
> > +	size_t index;
> > +	struct hlist_head *htable;
> > +	int i, ret = 0;
> > +
> > +	htable = vzalloc(htable_size * sizeof(struct hlist_head));
> > +	if (!htable) {
> > +		pr_err("Failed to allocate hash table\n");
> > +		return -ENOMEM;
> > +	}
> > +
> > +	for (i = 0; i < htable_size; i++)
> > +		INIT_HLIST_HEAD(&htable[i]);
> > +
> > +	for (index = 0; index < num_pages; index++) {
> > +		zram_slot_lock(zram, index);
> > +
> > +		if (!zram_allocated(zram, index)) {
> > +			zram_slot_unlock(zram, index);
> > +			continue;
> > +		}
> > +
> > +		if (zram_test_flag(zram, index, ZRAM_UNDER_WB) ||
> > +			zram_test_flag(zram, index, ZRAM_WB) ||
> > +			zram_test_flag(zram, index, ZRAM_SAME)) {
> > +			zram_slot_unlock(zram, index);
> > +			continue;
> > +		}
> > +
> > +		/* Ignore pages that have been recompressed */
> > +		if (zram_get_priority(zram, index) != 0) {
> > +			zram_slot_unlock(zram, index);
> > +			continue;
> > +		}
> > +
> > +		ret = zram_cmp_obj_and_merge(zram, htable, htable_size, index);
> > +		zram_slot_unlock(zram, index);
> > +		if (ret != 0)
> > +			goto out;
> > +	}
> > +
> > +out:
> > +	zram_free_htable_entries(htable, htable_size);
> > +	vfree(htable);
> > +	return ret;
> > +}
> > +
> >   static void zram_meta_free(struct zram *zram, u64 disksize)
> >   {
> >   	size_t num_pages = disksize >> PAGE_SHIFT;
> > @@ -1324,6 +1578,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
> >   static void zram_free_page(struct zram *zram, size_t index)
> >   {
> >   	unsigned long handle;
> > +	struct zram_rbtree_node *node;
> >   #ifdef CONFIG_ZRAM_MEMORY_TRACKING
> >   	zram->table[index].ac_time = 0;
> > @@ -1361,7 +1616,26 @@ static void zram_free_page(struct zram *zram, size_t index)
> >   	if (!handle)
> >   		return;
> > -	zs_free(zram->mem_pool, handle);
> > +	if (zram_test_flag(zram, index, ZRAM_MERGED)) {
> > +		zram_clear_flag(zram, index, ZRAM_MERGED);
> > +		mutex_lock(&zram_rbtree_mutex);
> > +
> > +		node = zram_rbtree_search(&zram->sph_rbtree, handle);
> > +		BUG_ON(!node);
> > +
> > +		node->cnt--;
> > +		if (node->cnt == 0) {
> > +			rb_erase(&node->node, &zram->sph_rbtree);
> > +			mutex_unlock(&zram_rbtree_mutex);
> > +
> > +			zs_free(zram->mem_pool, handle);
> > +			kfree(node);
> > +		} else {
> > +			mutex_unlock(&zram_rbtree_mutex);
> > +		}
> > +	} else {
> > +		zs_free(zram->mem_pool, handle);
> > +	}
> >   	atomic64_sub(zram_get_obj_size(zram, index),
> >   			&zram->stats.compr_data_size);
> > @@ -2421,6 +2695,8 @@ static int zram_add(void)
> >   	comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
> > +	zram->sph_rbtree = RB_ROOT;
> > +
> >   	zram_debugfs_register(zram);
> >   	pr_info("Added device: %s\n", zram->disk->disk_name);
> >   	return device_id;
> > diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> > index c5254626f051..4a7151c94523 100644
> > --- a/drivers/block/zram/zram_drv.h
> > +++ b/drivers/block/zram/zram_drv.h
> > @@ -56,6 +56,7 @@ enum zram_pageflags {
> >   	ZRAM_COMP_PRIORITY_BIT1, /* First bit of comp priority index */
> >   	ZRAM_COMP_PRIORITY_BIT2, /* Second bit of comp priority index */
> > +	ZRAM_MERGED,	/* page was merged */
> >   	__NR_ZRAM_PAGEFLAGS,
> >   };
> > @@ -140,5 +141,10 @@ struct zram {
> >   #ifdef CONFIG_ZRAM_MEMORY_TRACKING
> >   	struct dentry *debugfs_dir;
> >   #endif
> > +	/*
> > +	 * This is the same-pages handle rbtree, where the key is a handle
> > +	 * to identical pages and the value is a reference counter
> > +	 */
> > +	struct rb_root sph_rbtree;
> >   };
> >   #endif
> 

-- 
Thank you,
Alexey

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-11-23  4:13         ` Sergey Senozhatsky
  2022-11-23  8:53           ` Dmitry Rokosov
@ 2022-11-23  9:07           ` Aleksey Romanov
  1 sibling, 0 replies; 18+ messages in thread
From: Aleksey Romanov @ 2022-11-23  9:07 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Johannes Weiner, minchan, ngupta, akpm, linux-mm, linux-kernel,
	kernel, Dmitry Rokosov

On Wed, Nov 23, 2022 at 01:13:55PM +0900, Sergey Senozhatsky wrote:
> On (22/11/22 12:14), Aleksey Romanov wrote:
> > > IIRC that was patent in question:
> > > 
> > > https://patentimages.storage.googleapis.com/e2/66/9e/0ddbfae5c182ac/US9977598.pdf
> > 
> > I think the patent is talking about "mapping the virtual address" (like
> > in KSM). But zram works with the "handle" abstraction, which is a boxed
> > pointer to the required object. I think my implementation and the patent
> > are slightly different. 
> > 
> > Also, the patent speaks of "compressing" pages. In this case, we can add
> > zs_merge() function (like zs_compact()), that is, remove the merge logic
> > at the allocator level. zsmalloc doesn't say anything about what objects
> > it can work with. Implementation at the zsmalloc level is possible,
> > though more complicated than at the zram level. 
> > 
> > I believe that we can implement at least one of the options I proposed.
> > 
> > What do you think?
> 
> Oh, yeah, I'm not saying that we cannot have something like that
> in zram/zsmalloc, just wanted to give some historical retrospective
> on this and point at some implementation details that should be
> considered.

Okay, in this case, can you review my patches please?

-- 
Thank you,
Alexey

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-11-23  8:53           ` Dmitry Rokosov
@ 2022-12-01 10:14             ` Dmitry Rokosov
  2022-12-01 10:47               ` Sergey Senozhatsky
  0 siblings, 1 reply; 18+ messages in thread
From: Dmitry Rokosov @ 2022-12-01 10:14 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Aleksey Romanov, Johannes Weiner, minchan, ngupta, akpm,
	linux-mm, linux-kernel, kernel

Hello Sergey,

Hope you are doing well. Really sorry for the ping.

Did you get a chance to see the patch series, my questions, and
thoughts?

On Wed, Nov 23, 2022 at 11:53:06AM +0300, Dmitry Rokosov wrote:
> Hello Sergey,
> 
> Thank you for your quick and detailed support! Here is my two cents
> below.
> 
> On Wed, Nov 23, 2022 at 01:13:55PM +0900, Sergey Senozhatsky wrote:
> > On (22/11/22 12:14), Aleksey Romanov wrote:
> > > > IIRC that was patent in question:
> > > > 
> > > > https://patentimages.storage.googleapis.com/e2/66/9e/0ddbfae5c182ac/US9977598.pdf
> > > 
> > > I think the patent is talking about "mapping the virtual address" (like
> > > in KSM). But zram works with the "handle" abstraction, which is a boxed
> > > pointer to the required object. I think my implementation and the patent
> > > are slightly different. 
> > > 
> > > Also, the patent speaks of "compressing" pages. In this case, we can add
> > > zs_merge() function (like zs_compact()), that is, remove the merge logic
> > > at the allocator level. zsmalloc doesn't say anything about what objects
> > > it can work with. Implementation at the zsmalloc level is possible,
> > > though more complicated than at the zram level. 
> > > 
> > > I believe that we can implement at least one of the options I proposed.
> > > 
> > > What do you think?
> > 
> > Oh, yeah, I'm not saying that we cannot have something like that
> > in zram/zsmalloc, just wanted to give some historical retrospective
> > on this and point at some implementation details that should be
> > considered.
> 
> It's a very curious situation, I would say. I'm not so familiar with US
> patent law, but I suppose it should be based on some keywords and
> algorithms.
> 
> In terms of the algorithm, Alexey's patch differs a little from the one
> suggested in the patent paper. As for keywords, I think that by moving
> Alexey's same-page merging algorithm into zsmalloc we would lose the
> "compressing" keyword, because zsmalloc operates on "objects" only; it
> doesn't matter whether they are compressed or not.
> 
> Anyway, could you please suggest who could help us understand whether
> it's safe to use such a same-page merging algorithm upstream or not?
> Maybe we could ask the Linux Foundation lawyers, just a guess.
> I'm sure we shouldn't decline helpful features and optimizations without
> complete certainty about all the restrictions.

-- 
Thank you,
Dmitry


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-12-01 10:14             ` Dmitry Rokosov
@ 2022-12-01 10:47               ` Sergey Senozhatsky
  2022-12-01 11:14                 ` Dmitry Rokosov
  2023-01-11 14:00                 ` Alexey Romanov
  0 siblings, 2 replies; 18+ messages in thread
From: Sergey Senozhatsky @ 2022-12-01 10:47 UTC (permalink / raw)
  To: Dmitry Rokosov
  Cc: Sergey Senozhatsky, Aleksey Romanov, Johannes Weiner, minchan,
	ngupta, akpm, linux-mm, linux-kernel, kernel

On (22/12/01 13:14), Dmitry Rokosov wrote:
> Hello Sergey,
> 
> Hope you are doing well. Really sorry for the ping.
> 
> Did you get a chance to see the patch series, my questions, and
> thoughts?

Hey,

Not really, sorry. It's the holiday season plus a pre-merge-window week.
I'll probably start looking at it attentively next week or so.

In the meantime time I'll try to reach out to lawyers to get more
clarifications on that patent thingy.


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-12-01 10:47               ` Sergey Senozhatsky
@ 2022-12-01 11:14                 ` Dmitry Rokosov
  2022-12-01 13:29                   ` Sergey Senozhatsky
  2023-01-11 14:00                 ` Alexey Romanov
  1 sibling, 1 reply; 18+ messages in thread
From: Dmitry Rokosov @ 2022-12-01 11:14 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Aleksey Romanov, Johannes Weiner, minchan, ngupta, akpm,
	linux-mm, linux-kernel, kernel

On Thu, Dec 01, 2022 at 07:47:21PM +0900, Sergey Senozhatsky wrote:
> On (22/12/01 13:14), Dmitry Rokosov wrote:
> > Hello Sergey,
> > 
> > Hope you are doing well. Really sorry for the ping.
> > 
> > Did you get a chance to see the patch series, my questions, and
> > thoughts?
> 
> Hey,
> 
> Not really, sorry. It's the holiday season plus pre-merge window week.
> I'll probably start looking attentively next week or so.
> 
> In the meantime I'll try to reach out to lawyers to get more
> clarification on that patent thingy.

Yeah, got it. Thanks a lot for the support! Appreciate it!

BTW, happy holidays :)

-- 
Thank you,
Dmitry


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-12-01 11:14                 ` Dmitry Rokosov
@ 2022-12-01 13:29                   ` Sergey Senozhatsky
  0 siblings, 0 replies; 18+ messages in thread
From: Sergey Senozhatsky @ 2022-12-01 13:29 UTC (permalink / raw)
  To: Dmitry Rokosov
  Cc: Sergey Senozhatsky, Aleksey Romanov, Johannes Weiner, minchan,
	ngupta, akpm, linux-mm, linux-kernel, kernel

On (22/12/01 14:14), Dmitry Rokosov wrote:
> 
> BTW, happy holidays :)

Happy holidays!


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2022-12-01 10:47               ` Sergey Senozhatsky
  2022-12-01 11:14                 ` Dmitry Rokosov
@ 2023-01-11 14:00                 ` Alexey Romanov
  2023-02-06 10:37                   ` Sergey Senozhatsky
  1 sibling, 1 reply; 18+ messages in thread
From: Alexey Romanov @ 2023-01-11 14:00 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Dmitry Rokosov, Johannes Weiner, minchan, ngupta, akpm, linux-mm,
	linux-kernel, kernel

Hello Sergey! 

On Thu, Dec 01, 2022 at 07:47:21PM +0900, Sergey Senozhatsky wrote:
> On (22/12/01 13:14), Dmitry Rokosov wrote:
> > Hello Sergey,
> > 
> > Hope you are doing well. Really sorry for the ping.
> > 
> > Did you get a chance to see the patch series, my questions, and
> > thoughts?
> 
> Hey,
> 
> Not really, sorry. It's the holiday season plus pre-merge window week.
> I'll probably start looking attentively next week or so.
> 
> In the meantime I'll try to reach out to lawyers to get more
> clarification on that patent thingy.

Is there any news about the patent, the lawyers, and my patchset?

-- 
Thank you,
Alexey

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [RFC PATCH v1 0/4] Introduce merge identical pages mechanism
  2023-01-11 14:00                 ` Alexey Romanov
@ 2023-02-06 10:37                   ` Sergey Senozhatsky
  0 siblings, 0 replies; 18+ messages in thread
From: Sergey Senozhatsky @ 2023-02-06 10:37 UTC (permalink / raw)
  To: Alexey Romanov
  Cc: Dmitry Rokosov, Johannes Weiner, minchan, ngupta, akpm, linux-mm,
	linux-kernel, kernel

Hi,

On Wed, Jan 11, 2023 at 11:00 PM Alexey Romanov
<AVRomanov@sberdevices.ru> wrote:
>
> Is there any news about the patent, lawyers and my patchset?

I'll get back to you once I have any updates.


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2023-02-06 10:38 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-21 19:00 [RFC PATCH v1 0/4] Introduce merge identical pages mechanism Alexey Romanov
2022-11-21 19:00 ` [RFC PATCH v1 1/4] zram: introduce " Alexey Romanov
2022-11-23  8:25   ` Chen Wandun
2022-11-23  9:04     ` Aleksey Romanov
2022-11-21 19:00 ` [RFC PATCH v1 2/4] zram: add merge sysfs knob Alexey Romanov
2022-11-21 20:44 ` [RFC PATCH v1 0/4] Introduce merge identical pages mechanism Johannes Weiner
2022-11-22  3:00   ` Sergey Senozhatsky
2022-11-22  3:07     ` Sergey Senozhatsky
2022-11-22 12:14       ` Aleksey Romanov
2022-11-23  4:13         ` Sergey Senozhatsky
2022-11-23  8:53           ` Dmitry Rokosov
2022-12-01 10:14             ` Dmitry Rokosov
2022-12-01 10:47               ` Sergey Senozhatsky
2022-12-01 11:14                 ` Dmitry Rokosov
2022-12-01 13:29                   ` Sergey Senozhatsky
2023-01-11 14:00                 ` Alexey Romanov
2023-02-06 10:37                   ` Sergey Senozhatsky
2022-11-23  9:07           ` Aleksey Romanov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).