All of lore.kernel.org
* [PATCHv2 0/3] align zpool/zbud/zsmalloc on the api
@ 2015-09-26  8:04 ` Vitaly Wool
  0 siblings, 0 replies; 14+ messages in thread
From: Vitaly Wool @ 2015-09-26  8:04 UTC (permalink / raw)
  To: ddstreet, akpm, Seth Jennings
  Cc: Minchan Kim, Sergey Senozhatsky, linux-kernel, Linux-MM

Here comes the second iteration of the zpool/zbud/zsmalloc API alignment.
This time I have divided it into three patches: one for zpool, one for zbud and one for zsmalloc :)
The patches are non-intrusive and do not change any existing functionality; they
only add what is needed for alignment purposes.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCHv2 1/3] zpool: add compaction api
  2015-09-26  8:04 ` Vitaly Wool
@ 2015-09-26  8:05   ` Vitaly Wool
  -1 siblings, 0 replies; 14+ messages in thread
From: Vitaly Wool @ 2015-09-26  8:05 UTC (permalink / raw)
  To: ddstreet, akpm, Seth Jennings
  Cc: Minchan Kim, Sergey Senozhatsky, linux-kernel, Linux-MM

This patch adds two functions to the zpool API: zpool_compact()
and zpool_get_num_compacted(). The former triggers compaction for
the underlying allocator and the latter retrieves the number of
pages migrated by compaction over the lifetime of the pool.

Signed-off-by: Vitaly Wool <vitalywool@gmail.com>

---
 include/linux/zpool.h |  9 +++++++++
 mm/zpool.c            | 23 +++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/include/linux/zpool.h b/include/linux/zpool.h
index 42f8ec9..be1ed58 100644
--- a/include/linux/zpool.h
+++ b/include/linux/zpool.h
@@ -58,6 +58,10 @@ void *zpool_map_handle(struct zpool *pool, unsigned long handle,
 
 void zpool_unmap_handle(struct zpool *pool, unsigned long handle);
 
+unsigned long zpool_compact(struct zpool *pool);
+
+unsigned long zpool_get_num_compacted(struct zpool *pool);
+
 u64 zpool_get_total_size(struct zpool *pool);
 
 
@@ -72,6 +76,8 @@ u64 zpool_get_total_size(struct zpool *pool);
  * @shrink:	shrink the pool.
  * @map:	map a handle.
  * @unmap:	unmap a handle.
+ * @compact:	try to run compaction over a pool
+ * @get_num_compacted:	get the number of compacted pages for a pool
  * @total_size:	get total size of a pool.
  *
  * This is created by a zpool implementation and registered
@@ -98,6 +104,9 @@ struct zpool_driver {
 				enum zpool_mapmode mm);
 	void (*unmap)(void *pool, unsigned long handle);
 
+	unsigned long (*compact)(void *pool);
+	unsigned long (*get_num_compacted)(void *pool);
+
 	u64 (*total_size)(void *pool);
 };
 
diff --git a/mm/zpool.c b/mm/zpool.c
index 8f670d3..e469a66 100644
--- a/mm/zpool.c
+++ b/mm/zpool.c
@@ -340,6 +340,29 @@ void zpool_unmap_handle(struct zpool *zpool, unsigned long handle)
 	zpool->driver->unmap(zpool->pool, handle);
 }
 
+/**
+ * zpool_compact() - try to run compaction over a zpool
+ * @zpool	The zpool to compact
+ *
+ * Returns: the number of migrated pages
+ */
+unsigned long zpool_compact(struct zpool *zpool)
+{
+	return zpool->driver->compact(zpool->pool);
+}
+
+
+/**
+ * zpool_get_num_compacted() - get the number of migrated/compacted pages
+ * @zpool	The zpool to check
+ *
+ * Returns: the total number of migrated pages for the pool
+ */
+unsigned long zpool_get_num_compacted(struct zpool *zpool)
+{
+	return zpool->driver->get_num_compacted(zpool->pool);
+}
+
 /**
  * zpool_get_total_size() - The total size of the pool
  * @pool	The zpool to check
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread
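
For readers unfamiliar with the zpool side, here is a minimal caller-side
sketch of the two new calls. The function name and the pr_info() reporting
are hypothetical; only zpool_compact() and zpool_get_num_compacted() come
from the patch above, and the pool is assumed to have been created earlier
with zpool_create_pool():

/* Hypothetical example of driving the new compaction API from a zpool user. */
static void example_compact_pool(struct zpool *pool)
{
	/* Ask the backend allocator to compact itself right now. */
	unsigned long moved = zpool_compact(pool);

	pr_info("compaction migrated %lu pages in this run\n", moved);

	/* Lifetime total of pages migrated by compaction for this pool. */
	pr_info("pages compacted since pool creation: %lu\n",
		zpool_get_num_compacted(pool));
}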

* [PATCHv2 2/3] zbud: add compaction callbacks
  2015-09-26  8:04 ` Vitaly Wool
@ 2015-09-26  8:07   ` Vitaly Wool
  -1 siblings, 0 replies; 14+ messages in thread
From: Vitaly Wool @ 2015-09-26  8:07 UTC (permalink / raw)
  To: ddstreet, akpm, Seth Jennings
  Cc: Minchan Kim, Sergey Senozhatsky, linux-kernel, Linux-MM

Add no-op compaction callbacks to zbud.

Signed-off-by: Vitaly Wool <vitalywool@gmail.com>
---
 mm/zbud.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/zbud.c b/mm/zbud.c
index fa48bcdf..d67c0aa 100644
--- a/mm/zbud.c
+++ b/mm/zbud.c
@@ -195,6 +195,16 @@ static void zbud_zpool_unmap(void *pool, unsigned long handle)
 	zbud_unmap(pool, handle);
 }
 
+static unsigned long zbud_zpool_compact(void *pool)
+{
+	return 0;
+}
+
+static unsigned long zbud_zpool_get_compacted(void *pool)
+{
+	return 0;
+}
+
 static u64 zbud_zpool_total_size(void *pool)
 {
 	return zbud_get_pool_size(pool) * PAGE_SIZE;
@@ -210,6 +220,8 @@ static struct zpool_driver zbud_zpool_driver = {
 	.shrink =	zbud_zpool_shrink,
 	.map =		zbud_zpool_map,
 	.unmap =	zbud_zpool_unmap,
+	.compact =	zbud_zpool_compact,
+	.get_num_compacted =	zbud_zpool_get_compacted,
 	.total_size =	zbud_zpool_total_size,
 };
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCHv2 3/3] zsmalloc: add compaction callbacks
  2015-09-26  8:04 ` Vitaly Wool
@ 2015-09-26  8:09   ` Vitaly Wool
  -1 siblings, 0 replies; 14+ messages in thread
From: Vitaly Wool @ 2015-09-26  8:09 UTC (permalink / raw)
  To: ddstreet, akpm, Seth Jennings, Minchan Kim, Sergey Senozhatsky
  Cc: linux-kernel, Linux-MM

Add compaction callbacks for zpool compaction API extension.

Signed-off-by: Vitaly Wool <vitalywool@gmail.com>
---
 mm/zsmalloc.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f135b1b..8f2ddd1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -365,6 +365,19 @@ static void zs_zpool_unmap(void *pool, unsigned long handle)
 	zs_unmap_object(pool, handle);
 }
 
+static unsigned long zs_zpool_compact(void *pool)
+{
+	return zs_compact(pool);
+}
+
+static unsigned long zs_zpool_get_compacted(void *pool)
+{
+	struct zs_pool_stats stats;
+
+	zs_pool_stats(pool, &stats);
+	return stats.pages_compacted;
+}
+
 static u64 zs_zpool_total_size(void *pool)
 {
 	return zs_get_total_pages(pool) << PAGE_SHIFT;
@@ -380,6 +393,8 @@ static struct zpool_driver zs_zpool_driver = {
 	.shrink =	zs_zpool_shrink,
 	.map =		zs_zpool_map,
 	.unmap =	zs_zpool_unmap,
+	.compact =	zs_zpool_compact,
+	.get_num_compacted =	zs_zpool_get_compacted,
 	.total_size =	zs_zpool_total_size,
 };
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCHv2 0/3] align zpool/zbud/zsmalloc on the api
  2015-09-26  8:04 ` Vitaly Wool
@ 2015-09-30  8:03   ` Minchan Kim
  -1 siblings, 0 replies; 14+ messages in thread
From: Minchan Kim @ 2015-09-30  8:03 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: ddstreet, akpm, Seth Jennings, Sergey Senozhatsky, linux-kernel,
	Linux-MM

Hello,

On Sat, Sep 26, 2015 at 10:04:01AM +0200, Vitaly Wool wrote:
> Here comes the second iteration of the zpool/zbud/zsmalloc API alignment.
> This time I have divided it into three patches: one for zpool, one for zbud and one for zsmalloc :)
> The patches are non-intrusive and do not change any existing functionality; they
> only add what is needed for alignment purposes.

It exposes a new API, which needs justification in the description.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCHv2 0/3] align zpool/zbud/zsmalloc on the api
  2015-09-26  8:04 ` Vitaly Wool
@ 2015-10-09 12:36   ` Dan Streetman
  -1 siblings, 0 replies; 14+ messages in thread
From: Dan Streetman @ 2015-10-09 12:36 UTC (permalink / raw)
  To: Vitaly Wool
  Cc: Andrew Morton, Seth Jennings, Minchan Kim, Sergey Senozhatsky,
	linux-kernel, Linux-MM

On Sat, Sep 26, 2015 at 4:04 AM, Vitaly Wool <vitalywool@gmail.com> wrote:
> Here comes the second iteration of the zpool/zbud/zsmalloc API alignment.
> This time I have divided it into three patches: one for zpool, one for zbud and one for zsmalloc :)
> The patches are non-intrusive and do not change any existing functionality; they
> only add what is needed for alignment purposes.

While I agree with what the patches do - simply change zram to use
zpool so zbud is an option - I also understand (I think) Minchan's
reluctance to ack them.  From his point of view, with zram, there is
no upside to using zpool; zsmalloc has better storage density than
zbud.  And without compaction, zbud and zsmalloc should be
*approximately* as deterministic, at least theoretically.  So changing
zram to use zpool only introduces a layer between zram and zsmalloc,
creating more work for him to update both zram and zsmalloc in the
future; adding a new feature/interface to zsmalloc will mean he also
has to update zpool and get additional acks; he can add new features
to zram/zsmalloc now relatively uncontested.  I'm not trying to say
that's a bad thing, just that I can understand reluctance to add
complexity and additional future work, when in doubt of the benefit.
So I suspect you aren't going to get an ack from him until you show
him the specific benefit of using zbud, using hard data, and we all
figure out exactly why zbud is better in some situations.  And without
his ack, you're almost certainly not going to get zram or zsmalloc
patches in.

One thing I will say in general, and Seth has said this before too, is
that zbud is supposed to be the simple, reliable, consistent driver;
it won't get you the best storage efficiency, but it should be very
consistent in doing what it does, and it shouldn't change (much) from
one kernel version to another.  Opposed to that is zsmalloc, which is
supposed to get the best storage density, and is updated rather often
to get closer to that goal.  It has more knobs (at least internally),
and its code can change significantly from one kernel to another,
resulting in different (usually better) storage efficiency, but
possibly also resulting in less deterministic behavior, and more
processing overhead.  So I think even without hard numbers to compare
zbud and zsmalloc under zram, your general argument that some people
want stability over efficiency is not something to be dismissed; zbud
and zsmalloc clearly have different goals, and zsmalloc can't simply
be patched to be as consistent as zbud across kernel versions.  That's
not its goal; its goal is to achieve maximum storage efficiency.

Specifically regarding the determinism of each; obviously compaction
will have an impact, since it takes cpu cycles to do the compaction.
I don't know how much impact, but I think at minimum it would make
sense to add a module param to zsmalloc to allow disabling compaction.
But even without compaction, there is an important difference between
zbud and zsmalloc; zbud will never alloc more than 1 page when it
needs more storage, while zsmalloc will alloc between 1 and
ZS_MAX_PAGES_PER_ZSPAGE (currently 4) pages when it needs more
storage.  So in the worst case (if memory is tight and alloc_page()
takes a while), zsmalloc could take up to 4 times as long as zbud to
store a page.  Now, that should average out, where zsmalloc doesn't
need to alloc as many times as zbud (since it allocs more at once),
but on the small scale there will be less consistency of page storage
times with zsmalloc than zbud; at least, theoretically ;-)

I suggest you work with Minchan to find out what comparison data he
wants to see, to prove zbud is more stable/consistent under a certain
workload (and/or across kernel versions).

^ permalink raw reply	[flat|nested] 14+ messages in thread
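
A rough sketch of the module parameter Dan suggests above. The knob does not
exist in zsmalloc today and the parameter name is made up; only
zs_zpool_compact()/zs_compact() correspond to code in this series:

/* Hypothetical zsmalloc knob: let users trade compaction (storage density)
 * for more predictable latency by disabling it at module load or runtime. */
static bool compaction_enabled = true;
module_param(compaction_enabled, bool, 0644);
MODULE_PARM_DESC(compaction_enabled, "Allow pool compaction (default: true)");

static unsigned long zs_zpool_compact(void *pool)
{
	if (!compaction_enabled)
		return 0;	/* report that nothing was migrated */
	return zs_compact(pool);
}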

* Re: [PATCHv2 0/3] align zpool/zbud/zsmalloc on the api
  2015-10-09 12:36   ` Dan Streetman
@ 2015-10-14  1:27     ` Sergey Senozhatsky
  -1 siblings, 0 replies; 14+ messages in thread
From: Sergey Senozhatsky @ 2015-10-14  1:27 UTC (permalink / raw)
  To: Dan Streetman
  Cc: Vitaly Wool, Andrew Morton, Seth Jennings, Minchan Kim,
	Sergey Senozhatsky, linux-kernel, Linux-MM

Sorry for the long reply.

On (10/09/15 08:36), Dan Streetman wrote:
[..]
> Specifically regarding the determinism of each; obviously compaction
> will have an impact, since it takes cpu cycles to do the compaction.
> I don't know how much impact, but I think at minimum it would make
> sense to add a module param to zsmalloc to allow disabling compaction.

Well, this was on my list of things TODO; and, BTW, this was *ONE OF*
the reasons I added the bool flag `->shrinker_enabled'.

static unsigned long zs_shrinker_count(struct shrinker *shrinker,
		struct shrink_control *sc)
{
	...
	if (!pool->shrinker_enabled)
		return 0;
	...
}

So, technically, it's easy. I'm not sure, though, that I want to export
this low-level knob. It sort of makes sense, but at the same time it's a
bit tricky.

> But even without compaction, there is an important difference between
> zbud and zsmalloc; zbud will never alloc more than 1 page when it
> needs more storage, while zsmalloc will alloc between 1 and
> ZS_MAX_PAGES_PER_ZSPAGE (currently 4) pages when it needs more
> storage.  So in the worst case (if memory is tight and alloc_page()
> takes a while), zsmalloc could take up to 4 times as long as zbud to
> store a page.
>

hm... zsmalloc releases a zspage once it becomes empty, which happens:
a) when the zspage receives its 'final' zs_free() (no more objects in use)
   and turns into a ZS_EMPTY zspage
b) when compaction moves all of its objects to other zspages and, thus,
   the zspage becomes ZS_EMPTY

And, basically, this `allocate ZS_MAX_PAGES_PER_ZSPAGE pages' penalty
hits us (to some degree) even if we are not so tight on memory.


So... *Maybe* it makes some sense to guarantee (well, via a special
knob) that each class has no less than N unused objects (a hot cache),
which may be (but is not necessarily) an equivalent of keeping M
ZS_EMPTY zspage(-s) in the class. IOW, avoid free_zspage() if that will
result in K alloc_page() calls shortly afterwards, simply because we
ended up having just 1 or 2 unused objects in the class.

I can understand that some workloads care less about memory efficiency.


Looks like I finally have more time this week, so I'll try to take a
look at why zsmalloc makes Vitaly so unhappy.

	-ss

> Now, that should average out, where zsmalloc doesn't
> need to alloc as many times as zbud (since it allocs more at once),
> but on the small scale there will be less consistency of page storage
> times with zsmalloc than zbud; at least, theoretically ;-)
> 
> I suggest you work with Minchan to find out what comparison data he
> wants to see, to prove zbud is more stable/consistent under a certain
> workload (and/or across kernel versions).

^ permalink raw reply	[flat|nested] 14+ messages in thread
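
A very rough sketch of the per-class "hot cache" policy Sergey floats above.
Every name here is hypothetical and the real size_class bookkeeping in
zsmalloc is different; this only illustrates the decision being proposed:

/* Hypothetical helper: on the last zs_free() in a zspage, decide whether to
 * really release the zspage or keep it as an empty spare, so that the next
 * zs_malloc() in this class does not immediately have to alloc_page() again. */
static bool keep_zspage_as_spare(unsigned long unused_objs_in_class,
				 unsigned long min_spare_objs)
{
	/* Keep the empty zspage while the class has little or no slack left. */
	return unused_objs_in_class <= min_spare_objs;
}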

end of thread, other threads:[~2015-10-14  1:26 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-26  8:04 [PATCHv2 0/3] align zpool/zbud/zsmalloc on the api Vitaly Wool
2015-09-26  8:05 ` [PATCHv2 1/3] zpool: add compaction api Vitaly Wool
2015-09-26  8:07 ` [PATCHv2 2/3] zbud: add compaction callbacks Vitaly Wool
2015-09-26  8:09 ` [PATCHv2 3/3] zsmalloc: " Vitaly Wool
2015-09-30  8:03 ` [PATCHv2 0/3] align zpool/zbud/zsmalloc on the api Minchan Kim
2015-10-09 12:36 ` Dan Streetman
2015-10-14  1:27   ` Sergey Senozhatsky
