* [PATCH v3 0/4] zram memory control enhance
From: Minchan Kim @ 2014-08-21  0:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Sergey Senozhatsky, Jerome Marchand,
	juno.choi, seungho1.park, Luigi Semenzato, Nitin Gupta,
	Seth Jennings, Dan Streetman, ds2horner, Minchan Kim

Currently, zram has no way to limit its memory usage, so in theory
it can deplete system memory.
Users have asked for a limit several times because, even without
exhaustion, zram makes it hard to control the platform's memory usage.
This patchset adds that feature.

Patch 1 makes zs_get_total_size_bytes faster because later patches
call it frequently for the new feature.

Patch 2 changes the return unit of zs_get_total_size_bytes from
bytes to pages so that zsmalloc avoids an unnecessary operation
(ie, << PAGE_SHIFT).

Patch 3 adds the new feature. I added it to the zram layer rather
than to zsmalloc because the limit is zram's requirement, not
zsmalloc's, so other zsmalloc users (ie, zpool) are not affected
by an unnecessary branch in zsmalloc. In the future, if every
zsmalloc user wants the feature, we could easily move it from the
client side into zsmalloc, but the reverse would be painful.

Patch 4 adds a new facility to report zram's maximum memory usage
so that users can easily find its peak memory consumption without
polling /sys/block/zram0/mem_used_total at extremely short
intervals.

* From v2
 * Introduce helper function to update max_used_pages
   for readability - David
 * Avoid unnecessary zs_get_total_size call in updating loop
   for max_used_pages - David

* From v1
 * rebased on next-20140815
 * fix up race problem - David, Dan
 * reset mem_used_max as current total_bytes, rather than 0 - David
 * resetting works with only "0" write for extensibility - David, Dan

Minchan Kim (4):
  zsmalloc: move pages_allocated to zs_pool
  zsmalloc: change return value unit of zs_get_total_size_bytes
  zram: zram memory size limitation
  zram: report maximum used memory

 Documentation/ABI/testing/sysfs-block-zram | 19 ++++++
 Documentation/blockdev/zram.txt            | 21 +++++--
 drivers/block/zram/zram_drv.c              | 98 +++++++++++++++++++++++++++++-
 drivers/block/zram/zram_drv.h              |  6 ++
 include/linux/zsmalloc.h                   |  2 +-
 mm/zsmalloc.c                              | 38 ++++++------
 6 files changed, 159 insertions(+), 25 deletions(-)

-- 
2.0.0



* [PATCH v3 1/4] zsmalloc: move pages_allocated to zs_pool
From: Minchan Kim @ 2014-08-21  0:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Sergey Senozhatsky, Jerome Marchand,
	juno.choi, seungho1.park, Luigi Semenzato, Nitin Gupta,
	Seth Jennings, Dan Streetman, ds2horner, Minchan Kim

pages_allocated is currently counted per size_class, and when the
user wants to see total_size_bytes, zsmalloc gathers the value from
every size_class and reports the sum.

That is fine if the value is read rarely, but if the user starts
reading it frequently, it is not a good deal from a performance
point of view.

This patch moves the variable from size_class to zs_pool, which
reduces the memory footprint (from [255 * 8 bytes] to
[sizeof(unsigned long)]). It adds new locking overhead, but that
should not be severe because it is not on a hot path in zs_malloc
(ie, the lock is taken only when a new zspage is created, not on
every object allocation).

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/zsmalloc.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 94f38fac5e81..a65924255763 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -199,9 +199,6 @@ struct size_class {
 
 	spinlock_t lock;
 
-	/* stats */
-	u64 pages_allocated;
-
 	struct page *fullness_list[_ZS_NR_FULLNESS_GROUPS];
 };
 
@@ -217,9 +214,12 @@ struct link_free {
 };
 
 struct zs_pool {
+	spinlock_t stat_lock;
+
 	struct size_class size_class[ZS_SIZE_CLASSES];
 
 	gfp_t flags;	/* allocation flags used when growing pool */
+	unsigned long pages_allocated;
 };
 
 /*
@@ -967,6 +967,7 @@ struct zs_pool *zs_create_pool(gfp_t flags)
 
 	}
 
+	spin_lock_init(&pool->stat_lock);
 	pool->flags = flags;
 
 	return pool;
@@ -1028,8 +1029,10 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
 			return 0;
 
 		set_zspage_mapping(first_page, class->index, ZS_EMPTY);
+		spin_lock(&pool->stat_lock);
+		pool->pages_allocated += class->pages_per_zspage;
+		spin_unlock(&pool->stat_lock);
 		spin_lock(&class->lock);
-		class->pages_allocated += class->pages_per_zspage;
 	}
 
 	obj = (unsigned long)first_page->freelist;
@@ -1082,14 +1085,14 @@ void zs_free(struct zs_pool *pool, unsigned long obj)
 
 	first_page->inuse--;
 	fullness = fix_fullness_group(pool, first_page);
-
-	if (fullness == ZS_EMPTY)
-		class->pages_allocated -= class->pages_per_zspage;
-
 	spin_unlock(&class->lock);
 
-	if (fullness == ZS_EMPTY)
+	if (fullness == ZS_EMPTY) {
+		spin_lock(&pool->stat_lock);
+		pool->pages_allocated -= class->pages_per_zspage;
+		spin_unlock(&pool->stat_lock);
 		free_zspage(first_page);
+	}
 }
 EXPORT_SYMBOL_GPL(zs_free);
 
@@ -1185,12 +1188,11 @@ EXPORT_SYMBOL_GPL(zs_unmap_object);
 
 u64 zs_get_total_size_bytes(struct zs_pool *pool)
 {
-	int i;
-	u64 npages = 0;
-
-	for (i = 0; i < ZS_SIZE_CLASSES; i++)
-		npages += pool->size_class[i].pages_allocated;
+	u64 npages;
 
+	spin_lock(&pool->stat_lock);
+	npages = pool->pages_allocated;
+	spin_unlock(&pool->stat_lock);
 	return npages << PAGE_SHIFT;
 }
 EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
-- 
2.0.0



* [PATCH v3 2/4] zsmalloc: change return value unit of zs_get_total_size_bytes
From: Minchan Kim @ 2014-08-21  0:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Sergey Senozhatsky, Jerome Marchand,
	juno.choi, seungho1.park, Luigi Semenzato, Nitin Gupta,
	Seth Jennings, Dan Streetman, ds2horner, Minchan Kim

zs_get_total_size_bytes returns the amount of memory zsmalloc has
consumed in *bytes*, but zsmalloc operates in *pages* rather than
bytes, so let's change the API. The benefit is that we remove
unnecessary overhead (ie, converting pages to bytes) inside
zsmalloc.

Now, zswap can roll back to zswap_pool_pages.
Over to the zswap guys ;-)

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/block/zram/zram_drv.c |  4 ++--
 include/linux/zsmalloc.h      |  2 +-
 mm/zsmalloc.c                 | 10 +++++-----
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index d00831c3d731..302dd37bcea3 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -103,10 +103,10 @@ static ssize_t mem_used_total_show(struct device *dev,
 
 	down_read(&zram->init_lock);
 	if (init_done(zram))
-		val = zs_get_total_size_bytes(meta->mem_pool);
+		val = zs_get_total_size(meta->mem_pool);
 	up_read(&zram->init_lock);
 
-	return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
+	return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
 }
 
 static ssize_t max_comp_streams_show(struct device *dev,
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index e44d634e7fb7..105b56e45d23 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -46,6 +46,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 			enum zs_mapmode mm);
 void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
 
-u64 zs_get_total_size_bytes(struct zs_pool *pool);
+unsigned long zs_get_total_size(struct zs_pool *pool);
 
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index a65924255763..80408a1da03a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -299,7 +299,7 @@ static void zs_zpool_unmap(void *pool, unsigned long handle)
 
 static u64 zs_zpool_total_size(void *pool)
 {
-	return zs_get_total_size_bytes(pool);
+	return zs_get_total_size(pool) << PAGE_SHIFT;
 }
 
 static struct zpool_driver zs_zpool_driver = {
@@ -1186,16 +1186,16 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
-u64 zs_get_total_size_bytes(struct zs_pool *pool)
+unsigned long zs_get_total_size(struct zs_pool *pool)
 {
-	u64 npages;
+	unsigned long npages;
 
 	spin_lock(&pool->stat_lock);
 	npages = pool->pages_allocated;
 	spin_unlock(&pool->stat_lock);
-	return npages << PAGE_SHIFT;
+	return npages;
 }
-EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
+EXPORT_SYMBOL_GPL(zs_get_total_size);
 
 module_init(zs_init);
 module_exit(zs_exit);
-- 
2.0.0



* [PATCH v3 3/4] zram: zram memory size limitation
From: Minchan Kim @ 2014-08-21  0:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Sergey Senozhatsky, Jerome Marchand,
	juno.choi, seungho1.park, Luigi Semenzato, Nitin Gupta,
	Seth Jennings, Dan Streetman, ds2horner, Minchan Kim

Since zram has no way to limit its memory usage, it is hard to
manage system memory.

This patch adds a new sysfs knob, "mem_limit", to set up a limit
so that zram fails allocations once it reaches the limit.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 Documentation/ABI/testing/sysfs-block-zram |  9 +++++++
 Documentation/blockdev/zram.txt            | 20 ++++++++++++---
 drivers/block/zram/zram_drv.c              | 41 ++++++++++++++++++++++++++++++
 drivers/block/zram/zram_drv.h              |  5 ++++
 4 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
index 70ec992514d0..025331c19045 100644
--- a/Documentation/ABI/testing/sysfs-block-zram
+++ b/Documentation/ABI/testing/sysfs-block-zram
@@ -119,3 +119,12 @@ Description:
 		efficiency can be calculated using compr_data_size and this
 		statistic.
 		Unit: bytes
+
+What:		/sys/block/zram<id>/mem_limit
+Date:		August 2014
+Contact:	Minchan Kim <minchan@kernel.org>
+Description:
+		The mem_limit file is read/write and specifies the
+		maximum amount of memory zram can use to store
+		compressed data.
+		Unit: bytes
diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
index 0595c3f56ccf..9f239ff8c444 100644
--- a/Documentation/blockdev/zram.txt
+++ b/Documentation/blockdev/zram.txt
@@ -74,14 +74,26 @@ There is little point creating a zram of greater than twice the size of memory
 since we expect a 2:1 compression ratio. Note that zram uses about 0.1% of the
 size of the disk when not in use so a huge zram is wasteful.
 
-5) Activate:
+5) Set memory limit: Optional
+	Set the memory limit by writing a value to the 'mem_limit' sysfs node.
+	The value can be given in bytes or with memory suffixes.
+	Examples:
+	    # limit /dev/zram0 with 50MB memory
+	    echo $((50*1024*1024)) > /sys/block/zram0/mem_limit
+
+	    # Using mem suffixes
+	    echo 256K > /sys/block/zram0/mem_limit
+	    echo 512M > /sys/block/zram0/mem_limit
+	    echo 1G > /sys/block/zram0/mem_limit
+
+6) Activate:
 	mkswap /dev/zram0
 	swapon /dev/zram0
 
 	mkfs.ext4 /dev/zram1
 	mount /dev/zram1 /tmp
 
-6) Stats:
+7) Stats:
 	Per-device statistics are exported as various nodes under
 	/sys/block/zram<id>/
 		disksize
@@ -96,11 +108,11 @@ size of the disk when not in use so a huge zram is wasteful.
 		compr_data_size
 		mem_used_total
 
-7) Deactivate:
+8) Deactivate:
 	swapoff /dev/zram0
 	umount /dev/zram1
 
-8) Reset:
+9) Reset:
 	Write any positive value to 'reset' sysfs node
 	echo 1 > /sys/block/zram0/reset
 	echo 1 > /sys/block/zram1/reset
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 302dd37bcea3..adc91c7ecaef 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -122,6 +122,33 @@ static ssize_t max_comp_streams_show(struct device *dev,
 	return scnprintf(buf, PAGE_SIZE, "%d\n", val);
 }
 
+static ssize_t mem_limit_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	u64 val;
+	struct zram *zram = dev_to_zram(dev);
+
+	down_read(&zram->init_lock);
+	val = zram->limit_pages;
+	up_read(&zram->init_lock);
+
+	return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
+}
+
+static ssize_t mem_limit_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t len)
+{
+	u64 limit;
+	struct zram *zram = dev_to_zram(dev);
+
+	limit = memparse(buf, NULL);
+	down_write(&zram->init_lock);
+	zram->limit_pages = PAGE_ALIGN(limit) >> PAGE_SHIFT;
+	up_write(&zram->init_lock);
+
+	return len;
+}
+
 static ssize_t max_comp_streams_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
@@ -513,6 +540,14 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 		ret = -ENOMEM;
 		goto out;
 	}
+
+	if (zram->limit_pages &&
+		zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
+		zs_free(meta->mem_pool, handle);
+		ret = -ENOMEM;
+		goto out;
+	}
+
 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
 
 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
@@ -617,6 +652,9 @@ static void zram_reset_device(struct zram *zram, bool reset_capacity)
 	struct zram_meta *meta;
 
 	down_write(&zram->init_lock);
+
+	zram->limit_pages = 0;
+
 	if (!init_done(zram)) {
 		up_write(&zram->init_lock);
 		return;
@@ -857,6 +895,8 @@ static DEVICE_ATTR(initstate, S_IRUGO, initstate_show, NULL);
 static DEVICE_ATTR(reset, S_IWUSR, NULL, reset_store);
 static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
 static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
+static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
+		mem_limit_store);
 static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
 		max_comp_streams_show, max_comp_streams_store);
 static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
@@ -885,6 +925,7 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_orig_data_size.attr,
 	&dev_attr_compr_data_size.attr,
 	&dev_attr_mem_used_total.attr,
+	&dev_attr_mem_limit.attr,
 	&dev_attr_max_comp_streams.attr,
 	&dev_attr_comp_algorithm.attr,
 	NULL,
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index e0f725c87cc6..b7aa9c21553f 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -112,6 +112,11 @@ struct zram {
 	u64 disksize;	/* bytes */
 	int max_comp_streams;
 	struct zram_stats stats;
+	/*
+	 * the number of pages zram can consume for storing compressed data
+	 */
+	unsigned long limit_pages;
+
 	char compressor[10];
 };
 #endif
-- 
2.0.0



* [PATCH v3 4/4] zram: report maximum used memory
From: Minchan Kim @ 2014-08-21  0:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Sergey Senozhatsky, Jerome Marchand,
	juno.choi, seungho1.park, Luigi Semenzato, Nitin Gupta,
	Seth Jennings, Dan Streetman, ds2horner, Minchan Kim

Normally, a zram user could estimate the maximum memory zram has
consumed by polling mem_used_total via sysfs from userspace.

But this has a critical problem: the user can miss the peak memory
usage that occurs between polling intervals. To avoid that, the user
would have to poll at an extremely short interval (e.g., 0.0000000001s)
with the polling task mlocked to avoid page-fault delay when memory
pressure is heavy. That would be troublesome.

This patch adds a new knob, "mem_used_max", so the user can read the
maximum memory usage easily via the knob and reset it via
"echo 0 > /sys/block/zram0/mem_used_max".
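For reference, a minimal sketch of the intended userspace workflow.
The helper names are illustrative and the device name (zram0) is an
assumption; only the knob path and the "write 0 to reset" rule come
from the patch itself:

```shell
#!/bin/sh
# Helpers around the mem_used_max knob added by this patch.
# KNOB can be overridden (e.g. pointed at a plain file for testing).
KNOB="${KNOB:-/sys/block/zram0/mem_used_max}"

peak_bytes() {
	# Read the maximum memory (in bytes) zram has used since the
	# last reset.
	cat "$KNOB"
}

reset_peak() {
	# The store handler accepts only "0"; any other value is
	# rejected with -EINVAL.
	echo 0 > "$KNOB"
}
```

Unlike polling mem_used_total in a loop, reading mem_used_max once
after the workload finishes cannot miss a transient peak.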

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 Documentation/ABI/testing/sysfs-block-zram | 10 ++++++
 Documentation/blockdev/zram.txt            |  1 +
 drivers/block/zram/zram_drv.c              | 57 ++++++++++++++++++++++++++++--
 drivers/block/zram/zram_drv.h              |  1 +
 4 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
index 025331c19045..ffd1ea7443dd 100644
--- a/Documentation/ABI/testing/sysfs-block-zram
+++ b/Documentation/ABI/testing/sysfs-block-zram
@@ -120,6 +120,16 @@ Description:
 		statistic.
 		Unit: bytes
 
+What:		/sys/block/zram<id>/mem_used_max
+Date:		August 2014
+Contact:	Minchan Kim <minchan@kernel.org>
+Description:
+		The mem_used_max file is read/write and reports the maximum
+		amount of memory zram has consumed to store compressed data.
+		To reset the value, write "0" to the file; writing any other
+		value returns -EINVAL.
+		Unit: bytes
+
 What:		/sys/block/zram<id>/mem_limit
 Date:		August 2014
 Contact:	Minchan Kim <minchan@kernel.org>
diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
index 9f239ff8c444..3b2247c2d4cf 100644
--- a/Documentation/blockdev/zram.txt
+++ b/Documentation/blockdev/zram.txt
@@ -107,6 +107,7 @@ size of the disk when not in use so a huge zram is wasteful.
 		orig_data_size
 		compr_data_size
 		mem_used_total
+		mem_used_max
 
 8) Deactivate:
 	swapoff /dev/zram0
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index adc91c7ecaef..138787579478 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -149,6 +149,41 @@ static ssize_t mem_limit_store(struct device *dev,
 	return len;
 }
 
+static ssize_t mem_used_max_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	u64 val = 0;
+	struct zram *zram = dev_to_zram(dev);
+
+	down_read(&zram->init_lock);
+	if (init_done(zram))
+		val = atomic64_read(&zram->stats.max_used_pages);
+	up_read(&zram->init_lock);
+
+	return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
+}
+
+static ssize_t mem_used_max_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t len)
+{
+	int err;
+	unsigned long val;
+	struct zram *zram = dev_to_zram(dev);
+	struct zram_meta *meta = zram->meta;
+
+	err = kstrtoul(buf, 10, &val);
+	if (err || val != 0)
+		return -EINVAL;
+
+	down_read(&zram->init_lock);
+	if (init_done(zram))
+		atomic64_set(&zram->stats.max_used_pages,
+				zs_get_total_size(meta->mem_pool));
+	up_read(&zram->init_lock);
+
+	return len;
+}
+
 static ssize_t max_comp_streams_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
@@ -461,6 +496,18 @@ out_cleanup:
 	return ret;
 }
 
+static inline void update_used_max(struct zram *zram, const unsigned long pages)
+{
+	u64 old_max, cur_max;
+
+	do {
+		old_max = cur_max = atomic64_read(&zram->stats.max_used_pages);
+		if (pages > cur_max)
+			old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
+					cur_max, pages);
+	} while (old_max != cur_max);
+}
+
 static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 			   int offset)
 {
@@ -472,6 +519,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	struct zram_meta *meta = zram->meta;
 	struct zcomp_strm *zstrm;
 	bool locked = false;
+	unsigned long alloced_pages;
 
 	page = bvec->bv_page;
 	if (is_partial_io(bvec)) {
@@ -541,13 +589,15 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 		goto out;
 	}
 
-	if (zram->limit_pages &&
-		zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
+	alloced_pages = zs_get_total_size(meta->mem_pool);
+	if (zram->limit_pages && alloced_pages > zram->limit_pages) {
 		zs_free(meta->mem_pool, handle);
 		ret = -ENOMEM;
 		goto out;
 	}
 
+	update_used_max(zram, alloced_pages);
+
 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
 
 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
@@ -897,6 +947,8 @@ static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
 static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
 static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
 		mem_limit_store);
+static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show,
+		mem_used_max_store);
 static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
 		max_comp_streams_show, max_comp_streams_store);
 static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
@@ -926,6 +978,7 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_compr_data_size.attr,
 	&dev_attr_mem_used_total.attr,
 	&dev_attr_mem_limit.attr,
+	&dev_attr_mem_used_max.attr,
 	&dev_attr_max_comp_streams.attr,
 	&dev_attr_comp_algorithm.attr,
 	NULL,
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index b7aa9c21553f..29383312d543 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -90,6 +90,7 @@ struct zram_stats {
 	atomic64_t notify_free;	/* no. of swap slot free notifications */
 	atomic64_t zero_pages;		/* no. of zero filled pages */
 	atomic64_t pages_stored;	/* no. of pages currently stored */
+	atomic64_t max_used_pages;	/* no. of maximum pages stored */
 };
 
 struct zram_meta {
-- 
2.0.0



* Re: [PATCH v3 4/4] zram: report maximum used memory
  2014-08-21  0:27   ` Minchan Kim
@ 2014-08-21  2:20     ` David Horner
  -1 siblings, 0 replies; 30+ messages in thread
From: David Horner @ 2014-08-21  2:20 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, Dan Streetman

On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> Normally, zram user could get maximum memory usage zram consumed
> via polling mem_used_total with sysfs in userspace.
>
> But it has a critical problem because user can miss peak memory
> usage during update inverval of polling. For avoiding that,
> user should poll it with shorter interval(ie, 0.0000000001s)
> with mlocking to avoid page fault delay when memory pressure
> is heavy. It would be troublesome.
>
> This patch adds new knob "mem_used_max" so user could see
> the maximum memory usage easily via reading the knob and reset
> it via "echo 0 > /sys/block/zram0/mem_used_max".
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  Documentation/ABI/testing/sysfs-block-zram | 10 ++++++
>  Documentation/blockdev/zram.txt            |  1 +
>  drivers/block/zram/zram_drv.c              | 57 ++++++++++++++++++++++++++++--
>  drivers/block/zram/zram_drv.h              |  1 +
>  4 files changed, 67 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
> index 025331c19045..ffd1ea7443dd 100644
> --- a/Documentation/ABI/testing/sysfs-block-zram
> +++ b/Documentation/ABI/testing/sysfs-block-zram
> @@ -120,6 +120,16 @@ Description:
>                 statistic.
>                 Unit: bytes
>
> +What:          /sys/block/zram<id>/mem_used_max
> +Date:          August 2014
> +Contact:       Minchan Kim <minchan@kernel.org>
> +Description:
> +               The mem_used_max file is read/write and specifies the amount
> +               of maximum memory zram have consumed to store compressed data.
> +               For resetting the value, you should do "echo 0". Otherwise,
> +               you could see -EINVAL.
> +               Unit: bytes
> +
>  What:          /sys/block/zram<id>/mem_limit
>  Date:          August 2014
>  Contact:       Minchan Kim <minchan@kernel.org>
> diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> index 9f239ff8c444..3b2247c2d4cf 100644
> --- a/Documentation/blockdev/zram.txt
> +++ b/Documentation/blockdev/zram.txt
> @@ -107,6 +107,7 @@ size of the disk when not in use so a huge zram is wasteful.
>                 orig_data_size
>                 compr_data_size
>                 mem_used_total
> +               mem_used_max
>
>  8) Deactivate:
>         swapoff /dev/zram0
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index adc91c7ecaef..138787579478 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -149,6 +149,41 @@ static ssize_t mem_limit_store(struct device *dev,
>         return len;
>  }
>
> +static ssize_t mem_used_max_show(struct device *dev,
> +               struct device_attribute *attr, char *buf)
> +{
> +       u64 val = 0;
> +       struct zram *zram = dev_to_zram(dev);
> +
> +       down_read(&zram->init_lock);
> +       if (init_done(zram))
> +               val = atomic64_read(&zram->stats.max_used_pages);
> +       up_read(&zram->init_lock);
> +
> +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> +}
> +
> +static ssize_t mem_used_max_store(struct device *dev,
> +               struct device_attribute *attr, const char *buf, size_t len)
> +{
> +       int err;
> +       unsigned long val;
> +       struct zram *zram = dev_to_zram(dev);
> +       struct zram_meta *meta = zram->meta;
> +
> +       err = kstrtoul(buf, 10, &val);
> +       if (err || val != 0)
> +               return -EINVAL;
> +

Yes - this works better for the user than an explicit single-"0" check.
Thanks for testing.

> +       down_read(&zram->init_lock);
> +       if (init_done(zram))
> +               atomic64_set(&zram->stats.max_used_pages,
> +                               zs_get_total_size(meta->mem_pool));
> +       up_read(&zram->init_lock);
> +
> +       return len;
> +}
> +
>  static ssize_t max_comp_streams_store(struct device *dev,
>                 struct device_attribute *attr, const char *buf, size_t len)
>  {
> @@ -461,6 +496,18 @@ out_cleanup:
>         return ret;
>  }
>
> +static inline void update_used_max(struct zram *zram, const unsigned long pages)
> +{
> +       u64 old_max, cur_max;
> +
> +       do {
> +               old_max = cur_max = atomic64_read(&zram->stats.max_used_pages);
> +               if (pages > cur_max)
> +                       old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
> +                                       cur_max, pages);
> +       } while (old_max != cur_max);
> +}
> +

This can be tightened up some:

+static inline void update_used_max(struct zram *zram,
+                               const unsigned long pages)
+{
+       u64 prev_max, old_max;
+
+       prev_max = atomic64_read(&zram->stats.max_used_pages);
+       while (pages > prev_max) {
+               old_max = prev_max;
+               prev_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
+                                       old_max, pages);
+               if (prev_max == old_max)        /* cmpxchg succeeded */
+                       break;
+       }
+}
+

And then can be generalized to:


+static inline void update_max(atomic64_t *addr, const u64 newvalue)
+{
+       u64 prev_max, old_max;
+
+       prev_max = atomic64_read(addr);
+       while (newvalue > prev_max) {
+               old_max = prev_max;
+               prev_max = atomic64_cmpxchg(addr, old_max, newvalue);
+               if (prev_max == old_max)        /* cmpxchg succeeded */
+                       break;
+       }
+}
+


Called with: update_max(&zram->stats.max_used_pages, alloced_pages).

(I hope I got all the datatypes (I understand there are some implicit
casts) and modes correct).

The idea should be clear.

>  static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>                            int offset)
>  {
> @@ -472,6 +519,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>         struct zram_meta *meta = zram->meta;
>         struct zcomp_strm *zstrm;
>         bool locked = false;
> +       unsigned long alloced_pages;
>
>         page = bvec->bv_page;
>         if (is_partial_io(bvec)) {
> @@ -541,13 +589,15 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>                 goto out;
>         }
>
> -       if (zram->limit_pages &&
> -               zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
> +       alloced_pages = zs_get_total_size(meta->mem_pool);
> +       if (zram->limit_pages && alloced_pages > zram->limit_pages) {
>                 zs_free(meta->mem_pool, handle);
>                 ret = -ENOMEM;
>                 goto out;
>         }
>
> +       update_used_max(zram, alloced_pages);
> +

or generalized with update_max

>         cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
>
>         if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
> @@ -897,6 +947,8 @@ static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
>  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
>  static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
>                 mem_limit_store);
> +static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show,
> +               mem_used_max_store);
>  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
>                 max_comp_streams_show, max_comp_streams_store);
>  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
> @@ -926,6 +978,7 @@ static struct attribute *zram_disk_attrs[] = {
>         &dev_attr_compr_data_size.attr,
>         &dev_attr_mem_used_total.attr,
>         &dev_attr_mem_limit.attr,
> +       &dev_attr_mem_used_max.attr,
>         &dev_attr_max_comp_streams.attr,
>         &dev_attr_comp_algorithm.attr,
>         NULL,
> diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> index b7aa9c21553f..29383312d543 100644
> --- a/drivers/block/zram/zram_drv.h
> +++ b/drivers/block/zram/zram_drv.h
> @@ -90,6 +90,7 @@ struct zram_stats {
>         atomic64_t notify_free; /* no. of swap slot free notifications */
>         atomic64_t zero_pages;          /* no. of zero filled pages */
>         atomic64_t pages_stored;        /* no. of pages currently stored */
> +       atomic64_t max_used_pages;      /* no. of maximum pages stored */
>  };
>
>  struct zram_meta {
> --
> 2.0.0
>


* Re: [PATCH v3 4/4] zram: report maximum used memory
  2014-08-21  2:20     ` David Horner
@ 2014-08-21  2:41       ` Minchan Kim
  -1 siblings, 0 replies; 30+ messages in thread
From: Minchan Kim @ 2014-08-21  2:41 UTC (permalink / raw)
  To: David Horner
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, Dan Streetman

On Wed, Aug 20, 2014 at 10:20:07PM -0400, David Horner wrote:
> On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> > Normally, zram user could get maximum memory usage zram consumed
> > via polling mem_used_total with sysfs in userspace.
> >
> > But it has a critical problem because user can miss peak memory
> > usage during update inverval of polling. For avoiding that,
> > user should poll it with shorter interval(ie, 0.0000000001s)
> > with mlocking to avoid page fault delay when memory pressure
> > is heavy. It would be troublesome.
> >
> > This patch adds new knob "mem_used_max" so user could see
> > the maximum memory usage easily via reading the knob and reset
> > it via "echo 0 > /sys/block/zram0/mem_used_max".
> >
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  Documentation/ABI/testing/sysfs-block-zram | 10 ++++++
> >  Documentation/blockdev/zram.txt            |  1 +
> >  drivers/block/zram/zram_drv.c              | 57 ++++++++++++++++++++++++++++--
> >  drivers/block/zram/zram_drv.h              |  1 +
> >  4 files changed, 67 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
> > index 025331c19045..ffd1ea7443dd 100644
> > --- a/Documentation/ABI/testing/sysfs-block-zram
> > +++ b/Documentation/ABI/testing/sysfs-block-zram
> > @@ -120,6 +120,16 @@ Description:
> >                 statistic.
> >                 Unit: bytes
> >
> > +What:          /sys/block/zram<id>/mem_used_max
> > +Date:          August 2014
> > +Contact:       Minchan Kim <minchan@kernel.org>
> > +Description:
> > +               The mem_used_max file is read/write and reports the maximum
> > +               amount of memory zram has consumed to store compressed data.
> > +               To reset the value, write "0" to the file; writing any other
> > +               value returns -EINVAL.
> > +               Unit: bytes
> > +
> >  What:          /sys/block/zram<id>/mem_limit
> >  Date:          August 2014
> >  Contact:       Minchan Kim <minchan@kernel.org>
> > diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> > index 9f239ff8c444..3b2247c2d4cf 100644
> > --- a/Documentation/blockdev/zram.txt
> > +++ b/Documentation/blockdev/zram.txt
> > @@ -107,6 +107,7 @@ size of the disk when not in use so a huge zram is wasteful.
> >                 orig_data_size
> >                 compr_data_size
> >                 mem_used_total
> > +               mem_used_max
> >
> >  8) Deactivate:
> >         swapoff /dev/zram0
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index adc91c7ecaef..138787579478 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -149,6 +149,41 @@ static ssize_t mem_limit_store(struct device *dev,
> >         return len;
> >  }
> >
> > +static ssize_t mem_used_max_show(struct device *dev,
> > +               struct device_attribute *attr, char *buf)
> > +{
> > +       u64 val = 0;
> > +       struct zram *zram = dev_to_zram(dev);
> > +
> > +       down_read(&zram->init_lock);
> > +       if (init_done(zram))
> > +               val = atomic64_read(&zram->stats.max_used_pages);
> > +       up_read(&zram->init_lock);
> > +
> > +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> > +}
> > +
> > +static ssize_t mem_used_max_store(struct device *dev,
> > +               struct device_attribute *attr, const char *buf, size_t len)
> > +{
> > +       int err;
> > +       unsigned long val;
> > +       struct zram *zram = dev_to_zram(dev);
> > +       struct zram_meta *meta = zram->meta;
> > +
> > +       err = kstrtoul(buf, 10, &val);
> > +       if (err || val != 0)
> > +               return -EINVAL;
> > +
> 
> Yes - this works better for the user than explicit single "0" check
> Thanks for testing.
> 
> > +       down_read(&zram->init_lock);
> > +       if (init_done(zram))
> > +               atomic64_set(&zram->stats.max_used_pages,
> > +                               zs_get_total_size(meta->mem_pool));
> > +       up_read(&zram->init_lock);
> > +
> > +       return len;
> > +}
> > +
> >  static ssize_t max_comp_streams_store(struct device *dev,
> >                 struct device_attribute *attr, const char *buf, size_t len)
> >  {
> > @@ -461,6 +496,18 @@ out_cleanup:
> >         return ret;
> >  }
> >
> > +static inline void update_used_max(struct zram *zram, const unsigned long pages)
> > +{
> > +       u64 old_max, cur_max;
> > +
> > +       do {
> > +               old_max = cur_max = atomic64_read(&zram->stats.max_used_pages);
> > +               if (pages > cur_max)
> > +                       old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
> > +                                       cur_max, pages);
> > +       } while (old_max != cur_max);
> > +}
> > +
> 
> This can be tightened up some:

How much tighter does it make it?
If the gain isn't big, I'd like to stick with my version.

> 
> +static inline void update_used_max(struct zram *zram,
> +               const unsigned long pages)
> +{
> +       u64 prev_max, old_max = 0;
> +
> +       prev_max = atomic64_read(&zram->stats.max_used_pages);
> +       while (pages > prev_max && prev_max != old_max) {
> +               old_max = prev_max;
> +               prev_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
> +                                       old_max, pages);
> +       }
> +}
> +
> +
> 
> And then can be generalized to:

At the moment we don't need it, but it would be useful if we ever need
another max in the future, so I will keep it in mind. Thanks for the
suggestion!

> 
> 
> +static inline void update_max(uint64_t *addr, const uint64_t newvalue)
> +{
> +       uint64_t prev_max, old_max = 0;
> +
> +       prev_max = atomic64_read(addr);
> +       while (newvalue > prev_max && prev_max != old_max) {
> +               old_max = prev_max;
> +               prev_max = atomic64_cmpxchg(addr,
> +                                       old_max, newvalue);
> +       }
> +}
> +
> +
> 
> 
> Called with: update_max(&zram->stats.max_used_pages, alloced_pages).
> 
> (I hope I got all the datatypes (I understand there are some implicit
> casts) and modes correct).
> 
> The idea should be clear.
> 
> >  static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >                            int offset)
> >  {
> > @@ -472,6 +519,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >         struct zram_meta *meta = zram->meta;
> >         struct zcomp_strm *zstrm;
> >         bool locked = false;
> > +       unsigned long alloced_pages;
> >
> >         page = bvec->bv_page;
> >         if (is_partial_io(bvec)) {
> > @@ -541,13 +589,15 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >                 goto out;
> >         }
> >
> > -       if (zram->limit_pages &&
> > -               zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
> > +       alloced_pages = zs_get_total_size(meta->mem_pool);
> > +       if (zram->limit_pages && alloced_pages > zram->limit_pages) {
> >                 zs_free(meta->mem_pool, handle);
> >                 ret = -ENOMEM;
> >                 goto out;
> >         }
> >
> > +       update_used_max(zram, alloced_pages);
> > +
> 
> or generalized with update_max
> 
> >         cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
> >
> >         if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
> > @@ -897,6 +947,8 @@ static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
> >  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
> >  static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
> >                 mem_limit_store);
> > +static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show,
> > +               mem_used_max_store);
> >  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
> >                 max_comp_streams_show, max_comp_streams_store);
> >  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
> > @@ -926,6 +978,7 @@ static struct attribute *zram_disk_attrs[] = {
> >         &dev_attr_compr_data_size.attr,
> >         &dev_attr_mem_used_total.attr,
> >         &dev_attr_mem_limit.attr,
> > +       &dev_attr_mem_used_max.attr,
> >         &dev_attr_max_comp_streams.attr,
> >         &dev_attr_comp_algorithm.attr,
> >         NULL,
> > diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> > index b7aa9c21553f..29383312d543 100644
> > --- a/drivers/block/zram/zram_drv.h
> > +++ b/drivers/block/zram/zram_drv.h
> > @@ -90,6 +90,7 @@ struct zram_stats {
> >         atomic64_t notify_free; /* no. of swap slot free notifications */
> >         atomic64_t zero_pages;          /* no. of zero filled pages */
> >         atomic64_t pages_stored;        /* no. of pages currently stored */
> > +       atomic64_t max_used_pages;      /* no. of maximum pages stored */
> >  };
> >
> >  struct zram_meta {
> > --
> > 2.0.0
> >
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@kvack.org

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 4/4] zram: report maximum used memory
  2014-08-21  2:41       ` Minchan Kim
@ 2014-08-21  3:03         ` David Horner
  -1 siblings, 0 replies; 30+ messages in thread
From: David Horner @ 2014-08-21  3:03 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, Dan Streetman

On Wed, Aug 20, 2014 at 10:41 PM, Minchan Kim <minchan@kernel.org> wrote:
> On Wed, Aug 20, 2014 at 10:20:07PM -0400, David Horner wrote:
>> On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
>> > Normally, a zram user can get the maximum memory usage zram has
>> > consumed by polling mem_used_total via sysfs from userspace.
>> >
>> > But it has a critical problem: the user can miss the peak memory
>> > usage between polling intervals. To avoid that, the user would have
>> > to poll with an extremely short interval (e.g., 0.0000000001s) while
>> > mlocking the poller to avoid page-fault delay when memory pressure
>> > is heavy. That would be troublesome.
>> >
>> > This patch adds a new knob, "mem_used_max", so the user can easily
>> > read the maximum memory usage from the knob and reset it via
>> > "echo 0 > /sys/block/zram0/mem_used_max".
>> >
>> > Signed-off-by: Minchan Kim <minchan@kernel.org>
>> > ---
>> >  Documentation/ABI/testing/sysfs-block-zram | 10 ++++++
>> >  Documentation/blockdev/zram.txt            |  1 +
>> >  drivers/block/zram/zram_drv.c              | 57 ++++++++++++++++++++++++++++--
>> >  drivers/block/zram/zram_drv.h              |  1 +
>> >  4 files changed, 67 insertions(+), 2 deletions(-)
>> >
>> > diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
>> > index 025331c19045..ffd1ea7443dd 100644
>> > --- a/Documentation/ABI/testing/sysfs-block-zram
>> > +++ b/Documentation/ABI/testing/sysfs-block-zram
>> > @@ -120,6 +120,16 @@ Description:
>> >                 statistic.
>> >                 Unit: bytes
>> >
>> > +What:          /sys/block/zram<id>/mem_used_max
>> > +Date:          August 2014
>> > +Contact:       Minchan Kim <minchan@kernel.org>
>> > +Description:
>> > +               The mem_used_max file is read/write and reports the maximum
>> > +               amount of memory zram has consumed to store compressed data.
>> > +               To reset the value, write "0" to the file; writing any other
>> > +               value returns -EINVAL.
>> > +               Unit: bytes
>> > +
>> >  What:          /sys/block/zram<id>/mem_limit
>> >  Date:          August 2014
>> >  Contact:       Minchan Kim <minchan@kernel.org>
>> > diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
>> > index 9f239ff8c444..3b2247c2d4cf 100644
>> > --- a/Documentation/blockdev/zram.txt
>> > +++ b/Documentation/blockdev/zram.txt
>> > @@ -107,6 +107,7 @@ size of the disk when not in use so a huge zram is wasteful.
>> >                 orig_data_size
>> >                 compr_data_size
>> >                 mem_used_total
>> > +               mem_used_max
>> >
>> >  8) Deactivate:
>> >         swapoff /dev/zram0
>> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
>> > index adc91c7ecaef..138787579478 100644
>> > --- a/drivers/block/zram/zram_drv.c
>> > +++ b/drivers/block/zram/zram_drv.c
>> > @@ -149,6 +149,41 @@ static ssize_t mem_limit_store(struct device *dev,
>> >         return len;
>> >  }
>> >
>> > +static ssize_t mem_used_max_show(struct device *dev,
>> > +               struct device_attribute *attr, char *buf)
>> > +{
>> > +       u64 val = 0;
>> > +       struct zram *zram = dev_to_zram(dev);
>> > +
>> > +       down_read(&zram->init_lock);
>> > +       if (init_done(zram))
>> > +               val = atomic64_read(&zram->stats.max_used_pages);
>> > +       up_read(&zram->init_lock);
>> > +
>> > +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
>> > +}
>> > +
>> > +static ssize_t mem_used_max_store(struct device *dev,
>> > +               struct device_attribute *attr, const char *buf, size_t len)
>> > +{
>> > +       int err;
>> > +       unsigned long val;
>> > +       struct zram *zram = dev_to_zram(dev);
>> > +       struct zram_meta *meta = zram->meta;
>> > +
>> > +       err = kstrtoul(buf, 10, &val);
>> > +       if (err || val != 0)
>> > +               return -EINVAL;
>> > +
>>
>> Yes - this works better for the user than explicit single "0" check
>> Thanks for testing.
>>
>> > +       down_read(&zram->init_lock);
>> > +       if (init_done(zram))
>> > +               atomic64_set(&zram->stats.max_used_pages,
>> > +                               zs_get_total_size(meta->mem_pool));
>> > +       up_read(&zram->init_lock);
>> > +
>> > +       return len;
>> > +}
>> > +
>> >  static ssize_t max_comp_streams_store(struct device *dev,
>> >                 struct device_attribute *attr, const char *buf, size_t len)
>> >  {
>> > @@ -461,6 +496,18 @@ out_cleanup:
>> >         return ret;
>> >  }
>> >
>> > +static inline void update_used_max(struct zram *zram, const unsigned long pages)
>> > +{
>> > +       u64 old_max, cur_max;
>> > +
>> > +       do {
>> > +               old_max = cur_max = atomic64_read(&zram->stats.max_used_pages);
>> > +               if (pages > cur_max)
>> > +                       old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
>> > +                                       cur_max, pages);
>> > +       } while (old_max != cur_max);
>> > +}
>> > +
>>
>> This can be tightened up some:
>
> How much tighter does it make it?
> If the gain isn't big, I'd like to stick with my version.

It is another atomic (memory barrier) read each time through the loop.

But the expected usual case will be the uncontended case - a straight
fall-through.

It would only show up in heavily contended situations.

Here is another variation that removes the unnecessary read
but retains the original program structure and naming:

+static inline void update_used_max(struct zram *zram,
+               const unsigned long pages)
+{
+       u64 old_max, cur_max;
+
+       old_max = atomic64_read(&zram->stats.max_used_pages);
+       do {
+               cur_max = old_max;
+               if (pages > cur_max)
+                       old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
+                                       cur_max, pages);
+       } while (old_max != cur_max);
+}
+


>
>>
>> +static inline void update_used_max(struct zram *zram,
>> +               const unsigned long pages)
>> +{
>> +       u64 prev_max, old_max = 0;
>> +
>> +       prev_max = atomic64_read(&zram->stats.max_used_pages);
>> +       while (pages > prev_max && prev_max != old_max) {
>> +               old_max = prev_max;
>> +               prev_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
>> +                                       old_max, pages);
>> +       }
>> +}
>> +
>> +
>>
>> And then can be generalized to:
>
> At the moment we don't need it, but it would be useful if we ever need
> another max in the future, so I will keep it in mind. Thanks for the
> suggestion!
>
>>
>>
>> +static inline void update_max(uint64_t *addr, const uint64_t newvalue)
>> +{
>> +       uint64_t prev_max, old_max = 0;
>> +
>> +       prev_max = atomic64_read(addr);
>> +       while (newvalue > prev_max && prev_max != old_max) {
>> +               old_max = prev_max;
>> +               prev_max = atomic64_cmpxchg(addr,
>> +                                       old_max, newvalue);
>> +       }
>> +}
>> +
>> +
>>
>>
>> Called with: update_max(&zram->stats.max_used_pages, alloced_pages).
>>
>> (I hope I got all the datatypes (I understand there are some implicit
>> casts) and modes correct).
>>
>> The idea should be clear.
>>
>> >  static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>> >                            int offset)
>> >  {
>> > @@ -472,6 +519,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>> >         struct zram_meta *meta = zram->meta;
>> >         struct zcomp_strm *zstrm;
>> >         bool locked = false;
>> > +       unsigned long alloced_pages;
>> >
>> >         page = bvec->bv_page;
>> >         if (is_partial_io(bvec)) {
>> > @@ -541,13 +589,15 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>> >                 goto out;
>> >         }
>> >
>> > -       if (zram->limit_pages &&
>> > -               zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
>> > +       alloced_pages = zs_get_total_size(meta->mem_pool);
>> > +       if (zram->limit_pages && alloced_pages > zram->limit_pages) {
>> >                 zs_free(meta->mem_pool, handle);
>> >                 ret = -ENOMEM;
>> >                 goto out;
>> >         }
>> >
>> > +       update_used_max(zram, alloced_pages);
>> > +
>>
>> or generalized with update_max
>>
>> >         cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
>> >
>> >         if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
>> > @@ -897,6 +947,8 @@ static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
>> >  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
>> >  static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
>> >                 mem_limit_store);
>> > +static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show,
>> > +               mem_used_max_store);
>> >  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
>> >                 max_comp_streams_show, max_comp_streams_store);
>> >  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
>> > @@ -926,6 +978,7 @@ static struct attribute *zram_disk_attrs[] = {
>> >         &dev_attr_compr_data_size.attr,
>> >         &dev_attr_mem_used_total.attr,
>> >         &dev_attr_mem_limit.attr,
>> > +       &dev_attr_mem_used_max.attr,
>> >         &dev_attr_max_comp_streams.attr,
>> >         &dev_attr_comp_algorithm.attr,
>> >         NULL,
>> > diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
>> > index b7aa9c21553f..29383312d543 100644
>> > --- a/drivers/block/zram/zram_drv.h
>> > +++ b/drivers/block/zram/zram_drv.h
>> > @@ -90,6 +90,7 @@ struct zram_stats {
>> >         atomic64_t notify_free; /* no. of swap slot free notifications */
>> >         atomic64_t zero_pages;          /* no. of zero filled pages */
>> >         atomic64_t pages_stored;        /* no. of pages currently stored */
>> > +       atomic64_t max_used_pages;      /* no. of maximum pages stored */
>> >  };
>> >
>> >  struct zram_meta {
>> > --
>> > 2.0.0
>> >
>>
>> --
>> To unsubscribe, send a message with 'unsubscribe linux-mm' in
>> the body to majordomo@kvack.org.  For more info on Linux MM,
>> see: http://www.linux-mm.org/ .
>> Don't email: email@kvack.org
>
> --
> Kind regards,
> Minchan Kim

^ permalink raw reply	[flat|nested] 30+ messages in thread

>> > +               if (pages > cur_max)
>> > +                       old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
>> > +                                       cur_max, pages);
>> > +       } while (old_max != cur_max);
>> > +}
>> > +
>>
>> This can be tightened up some:
>
> How many does it make tight?
> If it's not a big, I'd like to stick my version.

It is one extra atomic (memory barrier) read each time through the loop.

But the expected usual case will be the uncontested one - a straight drop-through.

It would only show up in heavily contested situations -

This is another variation that removes the unnecessary read
but retains the programming structure and naming:

+static inline void update_used_max(struct zram *zram, const unsigned long pages)
 +{
 +       u64 old_max, cur_max;
 +
 +       old_max = atomic64_read(&zram->stats.max_used_pages);
 +       do {
 +               cur_max = old_max;
 +               if (pages > cur_max)
 +                       old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
 +                                       cur_max, pages);
 +       } while (old_max != cur_max);
 +}
 +


>
>>
>> +static inline void update_used_max(struct zram *zram, const unsigned long pages)
>> +{
>> +       u64 prev_max, old_max;
>> +
>> +       prev_max = atomic64_read(&zram->stats.max_used_pages);
>> +       do {
>> +               old_max = prev_max;
>> +               if (pages <= old_max)
>> +                       break;
>> +               prev_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
>> +                                       old_max, pages);
>> +       } while (prev_max != old_max);
>> +}
>> +
>>
>> And then can be generalized to:
>
> At the moment, we don't need it but it would be good if we need another
> max in future so I will keep in mind. Thanks for the suggestion!
>
>>
>>
>> +static inline void update_max(atomic64_t *addr, const u64 newvalue)
>> +{
>> +       u64 prev_max, old_max;
>> +
>> +       prev_max = atomic64_read(addr);
>> +       do {
>> +               old_max = prev_max;
>> +               if (newvalue <= old_max)
>> +                       break;
>> +               prev_max = atomic64_cmpxchg(addr, old_max, newvalue);
>> +       } while (prev_max != old_max);
>> +}
>> +
>>
>>
>> Called with : update_max(&zram->stats.max_used_pages,  alloced_pages).
>>
>> (I hope I got all the datatypes (I understand there are some implicit
>> casts) and modes correct).
>>
>> The idea should be clear.
>>
>> >  static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>> >                            int offset)
>> >  {
>> > @@ -472,6 +519,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>> >         struct zram_meta *meta = zram->meta;
>> >         struct zcomp_strm *zstrm;
>> >         bool locked = false;
>> > +       unsigned long alloced_pages;
>> >
>> >         page = bvec->bv_page;
>> >         if (is_partial_io(bvec)) {
>> > @@ -541,13 +589,15 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>> >                 goto out;
>> >         }
>> >
>> > -       if (zram->limit_pages &&
>> > -               zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
>> > +       alloced_pages = zs_get_total_size(meta->mem_pool);
>> > +       if (zram->limit_pages && alloced_pages > zram->limit_pages) {
>> >                 zs_free(meta->mem_pool, handle);
>> >                 ret = -ENOMEM;
>> >                 goto out;
>> >         }
>> >
>> > +       update_used_max(zram, alloced_pages);
>> > +
>>
>> or generalized with update_max
>>
>> >         cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
>> >
>> >         if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
>> > @@ -897,6 +947,8 @@ static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
>> >  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
>> >  static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
>> >                 mem_limit_store);
>> > +static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show,
>> > +               mem_used_max_store);
>> >  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
>> >                 max_comp_streams_show, max_comp_streams_store);
>> >  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
>> > @@ -926,6 +978,7 @@ static struct attribute *zram_disk_attrs[] = {
>> >         &dev_attr_compr_data_size.attr,
>> >         &dev_attr_mem_used_total.attr,
>> >         &dev_attr_mem_limit.attr,
>> > +       &dev_attr_mem_used_max.attr,
>> >         &dev_attr_max_comp_streams.attr,
>> >         &dev_attr_comp_algorithm.attr,
>> >         NULL,
>> > diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
>> > index b7aa9c21553f..29383312d543 100644
>> > --- a/drivers/block/zram/zram_drv.h
>> > +++ b/drivers/block/zram/zram_drv.h
>> > @@ -90,6 +90,7 @@ struct zram_stats {
>> >         atomic64_t notify_free; /* no. of swap slot free notifications */
>> >         atomic64_t zero_pages;          /* no. of zero filled pages */
>> >         atomic64_t pages_stored;        /* no. of pages currently stored */
>> > +       atomic64_t max_used_pages;      /* no. of maximum pages stored */
>> >  };
>> >
>> >  struct zram_meta {
>> > --
>> > 2.0.0
>> >
>>
>> --
>> To unsubscribe, send a message with 'unsubscribe linux-mm' in
>> the body to majordomo@kvack.org.  For more info on Linux MM,
>> see: http://www.linux-mm.org/ .
>> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
>
> --
> Kind regards,
> Minchan Kim


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 4/4] zram: report maximum used memory
  2014-08-21  3:03         ` David Horner
@ 2014-08-21  4:33           ` Minchan Kim
  -1 siblings, 0 replies; 30+ messages in thread
From: Minchan Kim @ 2014-08-21  4:33 UTC (permalink / raw)
  To: David Horner
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, Dan Streetman

On Wed, Aug 20, 2014 at 11:03:06PM -0400, David Horner wrote:
> On Wed, Aug 20, 2014 at 10:41 PM, Minchan Kim <minchan@kernel.org> wrote:
> > On Wed, Aug 20, 2014 at 10:20:07PM -0400, David Horner wrote:
> >> On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> >> > Normally, a zram user could get the maximum memory usage zram
> >> > consumed by polling mem_used_total via sysfs from userspace.
> >> >
> >> > But that has a critical problem: the user can miss the peak memory
> >> > usage between polling intervals. To avoid that, the user would have
> >> > to poll at an extremely short interval (ie, 0.0000000001s) and mlock
> >> > memory to avoid page-fault delay when memory pressure is heavy.
> >> > It would be troublesome.
> >> >
> >> > This patch adds a new knob, "mem_used_max", so the user can read the
> >> > maximum memory usage easily via the knob and reset it via
> >> > "echo 0 > /sys/block/zram0/mem_used_max".
> >> >
> >> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> >> > ---
> >> >  Documentation/ABI/testing/sysfs-block-zram | 10 ++++++
> >> >  Documentation/blockdev/zram.txt            |  1 +
> >> >  drivers/block/zram/zram_drv.c              | 57 ++++++++++++++++++++++++++++--
> >> >  drivers/block/zram/zram_drv.h              |  1 +
> >> >  4 files changed, 67 insertions(+), 2 deletions(-)
> >> >
> >> > diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
> >> > index 025331c19045..ffd1ea7443dd 100644
> >> > --- a/Documentation/ABI/testing/sysfs-block-zram
> >> > +++ b/Documentation/ABI/testing/sysfs-block-zram
> >> > @@ -120,6 +120,16 @@ Description:
> >> >                 statistic.
> >> >                 Unit: bytes
> >> >
> >> > +What:          /sys/block/zram<id>/mem_used_max
> >> > +Date:          August 2014
> >> > +Contact:       Minchan Kim <minchan@kernel.org>
> >> > +Description:
> >> > +               The mem_used_max file is read/write and reports the
> >> > +               maximum amount of memory zram has consumed to store
> >> > +               compressed data. To reset the value, write "0" to the
> >> > +               file; any other value returns -EINVAL.
> >> > +               Unit: bytes
> >> > +
> >> >  What:          /sys/block/zram<id>/mem_limit
> >> >  Date:          August 2014
> >> >  Contact:       Minchan Kim <minchan@kernel.org>
> >> > diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> >> > index 9f239ff8c444..3b2247c2d4cf 100644
> >> > --- a/Documentation/blockdev/zram.txt
> >> > +++ b/Documentation/blockdev/zram.txt
> >> > @@ -107,6 +107,7 @@ size of the disk when not in use so a huge zram is wasteful.
> >> >                 orig_data_size
> >> >                 compr_data_size
> >> >                 mem_used_total
> >> > +               mem_used_max
> >> >
> >> >  8) Deactivate:
> >> >         swapoff /dev/zram0
> >> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> >> > index adc91c7ecaef..138787579478 100644
> >> > --- a/drivers/block/zram/zram_drv.c
> >> > +++ b/drivers/block/zram/zram_drv.c
> >> > @@ -149,6 +149,41 @@ static ssize_t mem_limit_store(struct device *dev,
> >> >         return len;
> >> >  }
> >> >
> >> > +static ssize_t mem_used_max_show(struct device *dev,
> >> > +               struct device_attribute *attr, char *buf)
> >> > +{
> >> > +       u64 val = 0;
> >> > +       struct zram *zram = dev_to_zram(dev);
> >> > +
> >> > +       down_read(&zram->init_lock);
> >> > +       if (init_done(zram))
> >> > +               val = atomic64_read(&zram->stats.max_used_pages);
> >> > +       up_read(&zram->init_lock);
> >> > +
> >> > +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> >> > +}
> >> > +
> >> > +static ssize_t mem_used_max_store(struct device *dev,
> >> > +               struct device_attribute *attr, const char *buf, size_t len)
> >> > +{
> >> > +       int err;
> >> > +       unsigned long val;
> >> > +       struct zram *zram = dev_to_zram(dev);
> >> > +       struct zram_meta *meta = zram->meta;
> >> > +
> >> > +       err = kstrtoul(buf, 10, &val);
> >> > +       if (err || val != 0)
> >> > +               return -EINVAL;
> >> > +
> >>
> >> Yes - this works better for the user than explicit single "0" check
> >> Thanks for testing.
> >>
> >> > +       down_read(&zram->init_lock);
> >> > +       if (init_done(zram))
> >> > +               atomic64_set(&zram->stats.max_used_pages,
> >> > +                               zs_get_total_size(meta->mem_pool));
> >> > +       up_read(&zram->init_lock);
> >> > +
> >> > +       return len;
> >> > +}
> >> > +
> >> >  static ssize_t max_comp_streams_store(struct device *dev,
> >> >                 struct device_attribute *attr, const char *buf, size_t len)
> >> >  {
> >> > @@ -461,6 +496,18 @@ out_cleanup:
> >> >         return ret;
> >> >  }
> >> >
> >> > +static inline void update_used_max(struct zram *zram, const unsigned long pages)
> >> > +{
> >> > +       u64 old_max, cur_max;
> >> > +
> >> > +       do {
> >> > +               old_max = cur_max = atomic64_read(&zram->stats.max_used_pages);
> >> > +               if (pages > cur_max)
> >> > +                       old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
> >> > +                                       cur_max, pages);
> >> > +       } while (old_max != cur_max);
> >> > +}
> >> > +
> >>
> >> This can be tightened up some:
> >
> > How much does it tighten things up?
> > If it's not much, I'd like to stick with my version.
> 
> It is one extra atomic (memory barrier) read each time through the loop.
> 
> But the expected usual case will be the uncontested one - a straight drop-through.
> 
> It would only show up in heavily contested situations -
> 
> This is another variation that removes the unnecessary read
> but retains the programming structure and naming:

True. I will pick it up.
Thanks!

> 
> +static inline void update_used_max(struct zram *zram, const unsigned long pages)
>  +{
>  +       u64 old_max, cur_max;
>  +
>  +       old_max = atomic64_read(&zram->stats.max_used_pages);
>  +       do {
>  +               cur_max = old_max;
>  +               if (pages > cur_max)
>  +                       old_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
>  +                                       cur_max, pages);
>  +       } while (old_max != cur_max);
>  +}
>  +
> 
> 
> >
> >>
> >> +static inline void update_used_max(struct zram *zram, const unsigned long pages)
> >> +{
> >> +       u64 prev_max, old_max;
> >> +
> >> +       prev_max = atomic64_read(&zram->stats.max_used_pages);
> >> +       do {
> >> +               old_max = prev_max;
> >> +               if (pages <= old_max)
> >> +                       break;
> >> +               prev_max = atomic64_cmpxchg(&zram->stats.max_used_pages,
> >> +                                       old_max, pages);
> >> +       } while (prev_max != old_max);
> >> +}
> >> +
> >>
> >> And then can be generalized to:
> >
> > At the moment, we don't need it but it would be good if we need another
> > max in future so I will keep in mind. Thanks for the suggestion!
> >
> >>
> >>
> >> +static inline void update_max(atomic64_t *addr, const u64 newvalue)
> >> +{
> >> +       u64 prev_max, old_max;
> >> +
> >> +       prev_max = atomic64_read(addr);
> >> +       do {
> >> +               old_max = prev_max;
> >> +               if (newvalue <= old_max)
> >> +                       break;
> >> +               prev_max = atomic64_cmpxchg(addr, old_max, newvalue);
> >> +       } while (prev_max != old_max);
> >> +}
> >> +
> >>
> >>
> >> Called with : update_max(&zram->stats.max_used_pages,  alloced_pages).
> >>
> >> (I hope I got all the datatypes (I understand there are some implicit
> >> casts) and modes correct).
> >>
> >> The idea should be clear.
> >>
> >> >  static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >> >                            int offset)
> >> >  {
> >> > @@ -472,6 +519,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >> >         struct zram_meta *meta = zram->meta;
> >> >         struct zcomp_strm *zstrm;
> >> >         bool locked = false;
> >> > +       unsigned long alloced_pages;
> >> >
> >> >         page = bvec->bv_page;
> >> >         if (is_partial_io(bvec)) {
> >> > @@ -541,13 +589,15 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >> >                 goto out;
> >> >         }
> >> >
> >> > -       if (zram->limit_pages &&
> >> > -               zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
> >> > +       alloced_pages = zs_get_total_size(meta->mem_pool);
> >> > +       if (zram->limit_pages && alloced_pages > zram->limit_pages) {
> >> >                 zs_free(meta->mem_pool, handle);
> >> >                 ret = -ENOMEM;
> >> >                 goto out;
> >> >         }
> >> >
> >> > +       update_used_max(zram, alloced_pages);
> >> > +
> >>
> >> or generalized with update_max
> >>
> >> >         cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
> >> >
> >> >         if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
> >> > @@ -897,6 +947,8 @@ static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
> >> >  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
> >> >  static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
> >> >                 mem_limit_store);
> >> > +static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show,
> >> > +               mem_used_max_store);
> >> >  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
> >> >                 max_comp_streams_show, max_comp_streams_store);
> >> >  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
> >> > @@ -926,6 +978,7 @@ static struct attribute *zram_disk_attrs[] = {
> >> >         &dev_attr_compr_data_size.attr,
> >> >         &dev_attr_mem_used_total.attr,
> >> >         &dev_attr_mem_limit.attr,
> >> > +       &dev_attr_mem_used_max.attr,
> >> >         &dev_attr_max_comp_streams.attr,
> >> >         &dev_attr_comp_algorithm.attr,
> >> >         NULL,
> >> > diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> >> > index b7aa9c21553f..29383312d543 100644
> >> > --- a/drivers/block/zram/zram_drv.h
> >> > +++ b/drivers/block/zram/zram_drv.h
> >> > @@ -90,6 +90,7 @@ struct zram_stats {
> >> >         atomic64_t notify_free; /* no. of swap slot free notifications */
> >> >         atomic64_t zero_pages;          /* no. of zero filled pages */
> >> >         atomic64_t pages_stored;        /* no. of pages currently stored */
> >> > +       atomic64_t max_used_pages;      /* no. of maximum pages stored */
> >> >  };
> >> >
> >> >  struct zram_meta {
> >> > --
> >> > 2.0.0
> >> >
> >>
> >
> > --
> > Kind regards,
> > Minchan Kim
> 

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 30+ messages in thread


* Re: [PATCH v3 2/4] zsmalloc: change return value unit of zs_get_total_size_bytes
  2014-08-21  0:27   ` Minchan Kim
@ 2014-08-21 18:53     ` Dan Streetman
  -1 siblings, 0 replies; 30+ messages in thread
From: Dan Streetman @ 2014-08-21 18:53 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, David Horner

On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> zs_get_total_size_bytes returns the amount of memory zsmalloc
> has consumed in *bytes*, but zsmalloc internally operates in
> *pages*, not bytes. Let's change the API accordingly; the benefit
> is that zsmalloc avoids the unnecessary overhead of converting
> pages to bytes (ie, << PAGE_SHIFT).
>
> Now, zswap can rollback to zswap_pool_pages.
> Over to zswap guys ;-)

We could change zpool/zswap over to total pages instead of total
bytes, since both zbud and zsmalloc now report size in pages.  The
only downside would be if either changed later to not use only whole
pages (or if they start using huge pages for storage...), but for what
they do that seems unlikely.  After this patch is finalized I can
write up a quick patch unless Seth disagrees (or already has a patch
:)

>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  drivers/block/zram/zram_drv.c |  4 ++--
>  include/linux/zsmalloc.h      |  2 +-
>  mm/zsmalloc.c                 | 10 +++++-----
>  3 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index d00831c3d731..302dd37bcea3 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -103,10 +103,10 @@ static ssize_t mem_used_total_show(struct device *dev,
>
>         down_read(&zram->init_lock);
>         if (init_done(zram))
> -               val = zs_get_total_size_bytes(meta->mem_pool);
> +               val = zs_get_total_size(meta->mem_pool);
>         up_read(&zram->init_lock);
>
> -       return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
> +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
>  }
>
>  static ssize_t max_comp_streams_show(struct device *dev,
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> index e44d634e7fb7..105b56e45d23 100644
> --- a/include/linux/zsmalloc.h
> +++ b/include/linux/zsmalloc.h
> @@ -46,6 +46,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>                         enum zs_mapmode mm);
>  void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
>
> -u64 zs_get_total_size_bytes(struct zs_pool *pool);
> +unsigned long zs_get_total_size(struct zs_pool *pool);

Minor naming suggestion, but since the name is changing anyway:
"zs_get_total_size" implies to me that the unit is bytes. Would
"zs_get_total_pages" make it clearer that it returns a number of
pages, not bytes?

>
>  #endif
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index a65924255763..80408a1da03a 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -299,7 +299,7 @@ static void zs_zpool_unmap(void *pool, unsigned long handle)
>
>  static u64 zs_zpool_total_size(void *pool)
>  {
> -       return zs_get_total_size_bytes(pool);
> +       return zs_get_total_size(pool) << PAGE_SHIFT;
>  }
>
>  static struct zpool_driver zs_zpool_driver = {
> @@ -1186,16 +1186,16 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
>  }
>  EXPORT_SYMBOL_GPL(zs_unmap_object);
>
> -u64 zs_get_total_size_bytes(struct zs_pool *pool)
> +unsigned long zs_get_total_size(struct zs_pool *pool)
>  {
> -       u64 npages;
> +       unsigned long npages;
>
>         spin_lock(&pool->stat_lock);
>         npages = pool->pages_allocated;
>         spin_unlock(&pool->stat_lock);
> -       return npages << PAGE_SHIFT;
> +       return npages;
>  }
> -EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
> +EXPORT_SYMBOL_GPL(zs_get_total_size);
>
>  module_init(zs_init);
>  module_exit(zs_exit);
> --
> 2.0.0
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 3/4] zram: zram memory size limitation
  2014-08-21  0:27   ` Minchan Kim
@ 2014-08-21 19:08     ` Dan Streetman
  -1 siblings, 0 replies; 30+ messages in thread
From: Dan Streetman @ 2014-08-21 19:08 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, David Horner

On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> Since zram has no feature to control or limit its memory usage,
> it is hard to manage system memory.
>
> This patch adds a new sysfs knob, "mem_limit", to set a limit so
> that zram fails allocations once it reaches the limit.
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  Documentation/ABI/testing/sysfs-block-zram |  9 +++++++
>  Documentation/blockdev/zram.txt            | 20 ++++++++++++---
>  drivers/block/zram/zram_drv.c              | 41 ++++++++++++++++++++++++++++++
>  drivers/block/zram/zram_drv.h              |  5 ++++
>  4 files changed, 71 insertions(+), 4 deletions(-)
>
> diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
> index 70ec992514d0..025331c19045 100644
> --- a/Documentation/ABI/testing/sysfs-block-zram
> +++ b/Documentation/ABI/testing/sysfs-block-zram
> @@ -119,3 +119,12 @@ Description:
>                 efficiency can be calculated using compr_data_size and this
>                 statistic.
>                 Unit: bytes
> +
> +What:          /sys/block/zram<id>/mem_limit
> +Date:          August 2014
> +Contact:       Minchan Kim <minchan@kernel.org>
> +Description:
> +               The mem_limit file is read/write and specifies the
> +               maximum amount of memory zram can consume to store
> +               compressed data.

might want to clarify here that the value "0", which is the default,
disables the limit.

> +               Unit: bytes
> diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> index 0595c3f56ccf..9f239ff8c444 100644
> --- a/Documentation/blockdev/zram.txt
> +++ b/Documentation/blockdev/zram.txt
> @@ -74,14 +74,26 @@ There is little point creating a zram of greater than twice the size of memory
>  since we expect a 2:1 compression ratio. Note that zram uses about 0.1% of the
>  size of the disk when not in use so a huge zram is wasteful.
>
> -5) Activate:
> +5) Set memory limit: Optional
> +       Set the memory limit by writing the value to the 'mem_limit'
> +       sysfs node. The value can be given in bytes, or with memory
> +       size suffixes.
> +       Examples:
> +           # limit /dev/zram0 with 50MB memory
> +           echo $((50*1024*1024)) > /sys/block/zram0/mem_limit
> +
> +           # Using mem suffixes
> +           echo 256K > /sys/block/zram0/mem_limit
> +           echo 512M > /sys/block/zram0/mem_limit
> +           echo 1G > /sys/block/zram0/mem_limit

# To disable memory limit
echo 0 > /sys/block/zram0/mem_limit

> +
> +6) Activate:
>         mkswap /dev/zram0
>         swapon /dev/zram0
>
>         mkfs.ext4 /dev/zram1
>         mount /dev/zram1 /tmp
>
> -6) Stats:
> +7) Stats:
>         Per-device statistics are exported as various nodes under
>         /sys/block/zram<id>/
>                 disksize
> @@ -96,11 +108,11 @@ size of the disk when not in use so a huge zram is wasteful.
>                 compr_data_size
>                 mem_used_total
>
> -7) Deactivate:
> +8) Deactivate:
>         swapoff /dev/zram0
>         umount /dev/zram1
>
> -8) Reset:
> +9) Reset:
>         Write any positive value to 'reset' sysfs node
>         echo 1 > /sys/block/zram0/reset
>         echo 1 > /sys/block/zram1/reset
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 302dd37bcea3..adc91c7ecaef 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -122,6 +122,33 @@ static ssize_t max_comp_streams_show(struct device *dev,
>         return scnprintf(buf, PAGE_SIZE, "%d\n", val);
>  }
>
> +static ssize_t mem_limit_show(struct device *dev,
> +               struct device_attribute *attr, char *buf)
> +{
> +       u64 val;
> +       struct zram *zram = dev_to_zram(dev);
> +
> +       down_read(&zram->init_lock);
> +       val = zram->limit_pages;
> +       up_read(&zram->init_lock);
> +
> +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> +}
> +
> +static ssize_t mem_limit_store(struct device *dev,
> +               struct device_attribute *attr, const char *buf, size_t len)
> +{
> +       u64 limit;
> +       struct zram *zram = dev_to_zram(dev);
> +
> +       limit = memparse(buf, NULL);
> +       down_write(&zram->init_lock);
> +       zram->limit_pages = PAGE_ALIGN(limit) >> PAGE_SHIFT;
> +       up_write(&zram->init_lock);
> +
> +       return len;
> +}
> +
>  static ssize_t max_comp_streams_store(struct device *dev,
>                 struct device_attribute *attr, const char *buf, size_t len)
>  {
> @@ -513,6 +540,14 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>                 ret = -ENOMEM;
>                 goto out;
>         }
> +
> +       if (zram->limit_pages &&
> +               zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
> +               zs_free(meta->mem_pool, handle);
> +               ret = -ENOMEM;
> +               goto out;
> +       }
> +
>         cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
>
>         if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
> @@ -617,6 +652,9 @@ static void zram_reset_device(struct zram *zram, bool reset_capacity)
>         struct zram_meta *meta;
>
>         down_write(&zram->init_lock);
> +
> +       zram->limit_pages = 0;
> +
>         if (!init_done(zram)) {
>                 up_write(&zram->init_lock);
>                 return;
> @@ -857,6 +895,8 @@ static DEVICE_ATTR(initstate, S_IRUGO, initstate_show, NULL);
>  static DEVICE_ATTR(reset, S_IWUSR, NULL, reset_store);
>  static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
>  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
> +static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
> +               mem_limit_store);
>  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
>                 max_comp_streams_show, max_comp_streams_store);
>  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
> @@ -885,6 +925,7 @@ static struct attribute *zram_disk_attrs[] = {
>         &dev_attr_orig_data_size.attr,
>         &dev_attr_compr_data_size.attr,
>         &dev_attr_mem_used_total.attr,
> +       &dev_attr_mem_limit.attr,
>         &dev_attr_max_comp_streams.attr,
>         &dev_attr_comp_algorithm.attr,
>         NULL,
> diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> index e0f725c87cc6..b7aa9c21553f 100644
> --- a/drivers/block/zram/zram_drv.h
> +++ b/drivers/block/zram/zram_drv.h
> @@ -112,6 +112,11 @@ struct zram {
>         u64 disksize;   /* bytes */
>         int max_comp_streams;
>         struct zram_stats stats;
> +       /*
> +        * the number of pages zram can consume for storing compressed data
> +        */
> +       unsigned long limit_pages;
> +
>         char compressor[10];
>  };
>  #endif
> --
> 2.0.0
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/4] zsmalloc: change return value unit of zs_get_total_size_bytes
  2014-08-21 18:53     ` Dan Streetman
@ 2014-08-21 23:22       ` Minchan Kim
  -1 siblings, 0 replies; 30+ messages in thread
From: Minchan Kim @ 2014-08-21 23:22 UTC (permalink / raw)
  To: Dan Streetman
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, David Horner

Hi Dan,

On Thu, Aug 21, 2014 at 02:53:57PM -0400, Dan Streetman wrote:
> On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> > zs_get_total_size_bytes returns the amount of memory zsmalloc
> > has consumed in *bytes*, but zsmalloc internally operates in
> > *pages*, not bytes. Let's change the API accordingly; the benefit
> > is that zsmalloc avoids the unnecessary overhead of converting
> > pages to bytes (ie, << PAGE_SHIFT).
> >
> > Now, zswap can rollback to zswap_pool_pages.
> > Over to zswap guys ;-)
> 
> We could change zpool/zswap over to total pages instead of total
> bytes, since both zbud and zsmalloc now report size in pages.  The
> only downside would be if either changed later to not use only whole
> pages (or if they start using huge pages for storage...), but for what
> they do that seems unlikely.  After this patch is finalized I can
> write up a quick patch unless Seth disagrees (or already has a patch
> :)

That's exactly what I meant about zswap in this patch.
Now that you guys have noticed my intention, I will remove the
zswap part of the description in the next revision.

Thanks.

> 
> >
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  drivers/block/zram/zram_drv.c |  4 ++--
> >  include/linux/zsmalloc.h      |  2 +-
> >  mm/zsmalloc.c                 | 10 +++++-----
> >  3 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index d00831c3d731..302dd37bcea3 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -103,10 +103,10 @@ static ssize_t mem_used_total_show(struct device *dev,
> >
> >         down_read(&zram->init_lock);
> >         if (init_done(zram))
> > -               val = zs_get_total_size_bytes(meta->mem_pool);
> > +               val = zs_get_total_size(meta->mem_pool);
> >         up_read(&zram->init_lock);
> >
> > -       return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
> > +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> >  }
> >
> >  static ssize_t max_comp_streams_show(struct device *dev,
> > diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> > index e44d634e7fb7..105b56e45d23 100644
> > --- a/include/linux/zsmalloc.h
> > +++ b/include/linux/zsmalloc.h
> > @@ -46,6 +46,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> >                         enum zs_mapmode mm);
> >  void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> >
> > -u64 zs_get_total_size_bytes(struct zs_pool *pool);
> > +unsigned long zs_get_total_size(struct zs_pool *pool);
> 
> minor naming suggestion, but since the name is changing anyway,
> "zs_get_total_size" implies to me the units are bytes, would
> "zs_get_total_pages" be clearer that it's returning size in # of
> pages, not bytes?
> 
> >
> >  #endif
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index a65924255763..80408a1da03a 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -299,7 +299,7 @@ static void zs_zpool_unmap(void *pool, unsigned long handle)
> >
> >  static u64 zs_zpool_total_size(void *pool)
> >  {
> > -       return zs_get_total_size_bytes(pool);
> > +       return zs_get_total_size(pool) << PAGE_SHIFT;
> >  }
> >
> >  static struct zpool_driver zs_zpool_driver = {
> > @@ -1186,16 +1186,16 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
> >  }
> >  EXPORT_SYMBOL_GPL(zs_unmap_object);
> >
> > -u64 zs_get_total_size_bytes(struct zs_pool *pool)
> > +unsigned long zs_get_total_size(struct zs_pool *pool)
> >  {
> > -       u64 npages;
> > +       unsigned long npages;
> >
> >         spin_lock(&pool->stat_lock);
> >         npages = pool->pages_allocated;
> >         spin_unlock(&pool->stat_lock);
> > -       return npages << PAGE_SHIFT;
> > +       return npages;
> >  }
> > -EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
> > +EXPORT_SYMBOL_GPL(zs_get_total_size);
> >
> >  module_init(zs_init);
> >  module_exit(zs_exit);
> > --
> > 2.0.0
> >

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/4] zsmalloc: change return value unit of zs_get_total_size_bytes
  2014-08-21 18:53     ` Dan Streetman
@ 2014-08-21 23:22       ` Seth Jennings
  -1 siblings, 0 replies; 30+ messages in thread
From: Seth Jennings @ 2014-08-21 23:22 UTC (permalink / raw)
  To: Dan Streetman
  Cc: Minchan Kim, Andrew Morton, Linux-MM, linux-kernel,
	Sergey Senozhatsky, Jerome Marchand, juno.choi, seungho1.park,
	Luigi Semenzato, Nitin Gupta, David Horner

On Thu, Aug 21, 2014 at 02:53:57PM -0400, Dan Streetman wrote:
> On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> > zs_get_total_size_bytes returns the amount of memory zsmalloc
> > has consumed in *bytes*, but zsmalloc itself operates in *pages*
> > rather than bytes, so let's change the API. The benefit is that
> > we avoid the unnecessary overhead of converting pages to bytes
> > inside zsmalloc.
> >
> > Now, zswap can rollback to zswap_pool_pages.
> > Over to zswap guys ;-)
> 
> We could change zpool/zswap over to total pages instead of total
> bytes, since both zbud and zsmalloc now report size in pages.  The
> only downside would be if either changed later to not use only whole
> pages (or if they start using huge pages for storage...), but for what
> they do that seems unlikely.  After this patch is finalized I can
> write up a quick patch unless Seth disagrees (or already has a patch
> :)

I agree that we should move everything (back) to pages.

I can write the patch or you can; doesn't matter to me.  I might have
one started on the previous version of this patchset where I erroneously
determined that Minchan had broken stuff :-/

Seth

> 
> >
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  drivers/block/zram/zram_drv.c |  4 ++--
> >  include/linux/zsmalloc.h      |  2 +-
> >  mm/zsmalloc.c                 | 10 +++++-----
> >  3 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index d00831c3d731..302dd37bcea3 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -103,10 +103,10 @@ static ssize_t mem_used_total_show(struct device *dev,
> >
> >         down_read(&zram->init_lock);
> >         if (init_done(zram))
> > -               val = zs_get_total_size_bytes(meta->mem_pool);
> > +               val = zs_get_total_size(meta->mem_pool);
> >         up_read(&zram->init_lock);
> >
> > -       return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
> > +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> >  }
> >
> >  static ssize_t max_comp_streams_show(struct device *dev,
> > diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> > index e44d634e7fb7..105b56e45d23 100644
> > --- a/include/linux/zsmalloc.h
> > +++ b/include/linux/zsmalloc.h
> > @@ -46,6 +46,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> >                         enum zs_mapmode mm);
> >  void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> >
> > -u64 zs_get_total_size_bytes(struct zs_pool *pool);
> > +unsigned long zs_get_total_size(struct zs_pool *pool);
> 
> minor naming suggestion, but since the name is changing anyway,
> "zs_get_total_size" implies to me the units are bytes, would
> "zs_get_total_pages" be clearer that it's returning size in # of
> pages, not bytes?
> 
> >
> >  #endif
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index a65924255763..80408a1da03a 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -299,7 +299,7 @@ static void zs_zpool_unmap(void *pool, unsigned long handle)
> >
> >  static u64 zs_zpool_total_size(void *pool)
> >  {
> > -       return zs_get_total_size_bytes(pool);
> > +       return zs_get_total_size(pool) << PAGE_SHIFT;
> >  }
> >
> >  static struct zpool_driver zs_zpool_driver = {
> > @@ -1186,16 +1186,16 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
> >  }
> >  EXPORT_SYMBOL_GPL(zs_unmap_object);
> >
> > -u64 zs_get_total_size_bytes(struct zs_pool *pool)
> > +unsigned long zs_get_total_size(struct zs_pool *pool)
> >  {
> > -       u64 npages;
> > +       unsigned long npages;
> >
> >         spin_lock(&pool->stat_lock);
> >         npages = pool->pages_allocated;
> >         spin_unlock(&pool->stat_lock);
> > -       return npages << PAGE_SHIFT;
> > +       return npages;
> >  }
> > -EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
> > +EXPORT_SYMBOL_GPL(zs_get_total_size);
> >
> >  module_init(zs_init);
> >  module_exit(zs_exit);
> > --
> > 2.0.0
> >

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 2/4] zsmalloc: change return value unit of zs_get_total_size_bytes
  2014-08-21 18:53     ` Dan Streetman
@ 2014-08-21 23:23       ` Minchan Kim
  -1 siblings, 0 replies; 30+ messages in thread
From: Minchan Kim @ 2014-08-21 23:23 UTC (permalink / raw)
  To: Dan Streetman
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, David Horner

On Thu, Aug 21, 2014 at 02:53:57PM -0400, Dan Streetman wrote:
> On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> > zs_get_total_size_bytes returns the amount of memory zsmalloc
> > has consumed in *bytes*, but zsmalloc itself operates in *pages*
> > rather than bytes, so let's change the API. The benefit is that
> > we avoid the unnecessary overhead of converting pages to bytes
> > inside zsmalloc.
> >
> > Now, zswap can rollback to zswap_pool_pages.
> > Over to zswap guys ;-)
> 
> We could change zpool/zswap over to total pages instead of total
> bytes, since both zbud and zsmalloc now report size in pages.  The
> only downside would be if either changed later to not use only whole
> pages (or if they start using huge pages for storage...), but for what
> they do that seems unlikely.  After this patch is finalized I can
> write up a quick patch unless Seth disagrees (or already has a patch
> :)
> 
> >
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  drivers/block/zram/zram_drv.c |  4 ++--
> >  include/linux/zsmalloc.h      |  2 +-
> >  mm/zsmalloc.c                 | 10 +++++-----
> >  3 files changed, 8 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index d00831c3d731..302dd37bcea3 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -103,10 +103,10 @@ static ssize_t mem_used_total_show(struct device *dev,
> >
> >         down_read(&zram->init_lock);
> >         if (init_done(zram))
> > -               val = zs_get_total_size_bytes(meta->mem_pool);
> > +               val = zs_get_total_size(meta->mem_pool);
> >         up_read(&zram->init_lock);
> >
> > -       return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
> > +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> >  }
> >
> >  static ssize_t max_comp_streams_show(struct device *dev,
> > diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> > index e44d634e7fb7..105b56e45d23 100644
> > --- a/include/linux/zsmalloc.h
> > +++ b/include/linux/zsmalloc.h
> > @@ -46,6 +46,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> >                         enum zs_mapmode mm);
> >  void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
> >
> > -u64 zs_get_total_size_bytes(struct zs_pool *pool);
> > +unsigned long zs_get_total_size(struct zs_pool *pool);
> 
> minor naming suggestion, but since the name is changing anyway,
> "zs_get_total_size" implies to me the units are bytes, would
> "zs_get_total_pages" be clearer that it's returning size in # of
> pages, not bytes?

It's better. Will change.
Thanks!

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 3/4] zram: zram memory size limitation
  2014-08-21 19:08     ` Dan Streetman
@ 2014-08-21 23:36       ` Minchan Kim
  -1 siblings, 0 replies; 30+ messages in thread
From: Minchan Kim @ 2014-08-21 23:36 UTC (permalink / raw)
  To: Dan Streetman
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, David Horner

On Thu, Aug 21, 2014 at 03:08:12PM -0400, Dan Streetman wrote:
> On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> > Since zram has no feature to limit its memory usage,
> > it is hard to manage system memory.
> >
> > This patch adds a new knob, "mem_limit", via sysfs to set up
> > a limit so that zram fails allocations once it reaches
> > the limit.
> >
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  Documentation/ABI/testing/sysfs-block-zram |  9 +++++++
> >  Documentation/blockdev/zram.txt            | 20 ++++++++++++---
> >  drivers/block/zram/zram_drv.c              | 41 ++++++++++++++++++++++++++++++
> >  drivers/block/zram/zram_drv.h              |  5 ++++
> >  4 files changed, 71 insertions(+), 4 deletions(-)
> >
> > diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
> > index 70ec992514d0..025331c19045 100644
> > --- a/Documentation/ABI/testing/sysfs-block-zram
> > +++ b/Documentation/ABI/testing/sysfs-block-zram
> > @@ -119,3 +119,12 @@ Description:
> >                 efficiency can be calculated using compr_data_size and this
> >                 statistic.
> >                 Unit: bytes
> > +
> > +What:          /sys/block/zram<id>/mem_limit
> > +Date:          August 2014
> > +Contact:       Minchan Kim <minchan@kernel.org>
> > +Description:
> > +               The mem_limit file is read/write and specifies the
> > +               maximum amount of memory zram can consume to store
> > +               compressed data.
> 
> might want to clarify here that the value "0", which is the default,
> disables the limit.

Okay.

> 
> > +               Unit: bytes
> > diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> > index 0595c3f56ccf..9f239ff8c444 100644
> > --- a/Documentation/blockdev/zram.txt
> > +++ b/Documentation/blockdev/zram.txt
> > @@ -74,14 +74,26 @@ There is little point creating a zram of greater than twice the size of memory
> >  since we expect a 2:1 compression ratio. Note that zram uses about 0.1% of the
> >  size of the disk when not in use so a huge zram is wasteful.
> >
> > -5) Activate:
> > +5) Set memory limit: Optional
> > +       Set memory limit by writing the value to sysfs node 'mem_limit'.
> > +       The value can be either in bytes or you can use mem suffixes.
> > +       Examples:
> > +           # limit /dev/zram0 to 50MB of memory
> > +           echo $((50*1024*1024)) > /sys/block/zram0/mem_limit
> > +
> > +           # Using mem suffixes
> > +           echo 256K > /sys/block/zram0/mem_limit
> > +           echo 512M > /sys/block/zram0/mem_limit
> > +           echo 1G > /sys/block/zram0/mem_limit
> 
> # To disable memory limit
> echo 0 > /sys/block/zram0/mem_limit

Yeb.

> 
> > +
> > +6) Activate:
> >         mkswap /dev/zram0
> >         swapon /dev/zram0
> >
> >         mkfs.ext4 /dev/zram1
> >         mount /dev/zram1 /tmp
> >
> > -6) Stats:
> > +7) Stats:
> >         Per-device statistics are exported as various nodes under
> >         /sys/block/zram<id>/
> >                 disksize
> > @@ -96,11 +108,11 @@ size of the disk when not in use so a huge zram is wasteful.
> >                 compr_data_size
> >                 mem_used_total
> >
> > -7) Deactivate:
> > +8) Deactivate:
> >         swapoff /dev/zram0
> >         umount /dev/zram1
> >
> > -8) Reset:
> > +9) Reset:
> >         Write any positive value to 'reset' sysfs node
> >         echo 1 > /sys/block/zram0/reset
> >         echo 1 > /sys/block/zram1/reset
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index 302dd37bcea3..adc91c7ecaef 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -122,6 +122,33 @@ static ssize_t max_comp_streams_show(struct device *dev,
> >         return scnprintf(buf, PAGE_SIZE, "%d\n", val);
> >  }
> >
> > +static ssize_t mem_limit_show(struct device *dev,
> > +               struct device_attribute *attr, char *buf)
> > +{
> > +       u64 val;
> > +       struct zram *zram = dev_to_zram(dev);
> > +
> > +       down_read(&zram->init_lock);
> > +       val = zram->limit_pages;
> > +       up_read(&zram->init_lock);
> > +
> > +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> > +}
> > +
> > +static ssize_t mem_limit_store(struct device *dev,
> > +               struct device_attribute *attr, const char *buf, size_t len)
> > +{
> > +       u64 limit;
> > +       struct zram *zram = dev_to_zram(dev);
> > +
> > +       limit = memparse(buf, NULL);
> > +       down_write(&zram->init_lock);
> > +       zram->limit_pages = PAGE_ALIGN(limit) >> PAGE_SHIFT;
> > +       up_write(&zram->init_lock);
> > +
> > +       return len;
> > +}
> > +
> >  static ssize_t max_comp_streams_store(struct device *dev,
> >                 struct device_attribute *attr, const char *buf, size_t len)
> >  {
> > @@ -513,6 +540,14 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >                 ret = -ENOMEM;
> >                 goto out;
> >         }
> > +
> > +       if (zram->limit_pages &&
> > +               zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
> > +               zs_free(meta->mem_pool, handle);
> > +               ret = -ENOMEM;
> > +               goto out;
> > +       }
> > +
> >         cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
> >
> >         if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
> > @@ -617,6 +652,9 @@ static void zram_reset_device(struct zram *zram, bool reset_capacity)
> >         struct zram_meta *meta;
> >
> >         down_write(&zram->init_lock);
> > +
> > +       zram->limit_pages = 0;
> > +
> >         if (!init_done(zram)) {
> >                 up_write(&zram->init_lock);
> >                 return;
> > @@ -857,6 +895,8 @@ static DEVICE_ATTR(initstate, S_IRUGO, initstate_show, NULL);
> >  static DEVICE_ATTR(reset, S_IWUSR, NULL, reset_store);
> >  static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
> >  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
> > +static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
> > +               mem_limit_store);
> >  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
> >                 max_comp_streams_show, max_comp_streams_store);
> >  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
> > @@ -885,6 +925,7 @@ static struct attribute *zram_disk_attrs[] = {
> >         &dev_attr_orig_data_size.attr,
> >         &dev_attr_compr_data_size.attr,
> >         &dev_attr_mem_used_total.attr,
> > +       &dev_attr_mem_limit.attr,
> >         &dev_attr_max_comp_streams.attr,
> >         &dev_attr_comp_algorithm.attr,
> >         NULL,
> > diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> > index e0f725c87cc6..b7aa9c21553f 100644
> > --- a/drivers/block/zram/zram_drv.h
> > +++ b/drivers/block/zram/zram_drv.h
> > @@ -112,6 +112,11 @@ struct zram {
> >         u64 disksize;   /* bytes */
> >         int max_comp_streams;
> >         struct zram_stats stats;
> > +       /*
> > +        * the number of pages zram can consume for storing compressed data
> > +        */
> > +       unsigned long limit_pages;
> > +
> >         char compressor[10];
> >  };
> >  #endif
> > --
> > 2.0.0
> >
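
Taken together with the v2 review notes (writing "0" resets/disables the
limit), a session with the proposed knob would look roughly like this.
This is an illustrative sketch, not a transcript from a real system; it
assumes the zram module is loaded and /dev/zram0 exists:

```shell
# cap the pool at 64MB before putting data on the device
# (mem_limit_store() uses memparse(), so K/M/G suffixes work)
echo 64M > /sys/block/zram0/mem_limit

# read the limit back; the show() side reports bytes
cat /sys/block/zram0/mem_limit        # 67108864

# once the pool's page count crosses the cap, writes fail with ENOMEM

# writing 0 (the default) disables the limit again
echo 0 > /sys/block/zram0/mem_limit
```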

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v3 3/4] zram: zram memory size limitation
@ 2014-08-21 23:36       ` Minchan Kim
  0 siblings, 0 replies; 30+ messages in thread
From: Minchan Kim @ 2014-08-21 23:36 UTC (permalink / raw)
  To: Dan Streetman
  Cc: Andrew Morton, Linux-MM, linux-kernel, Sergey Senozhatsky,
	Jerome Marchand, juno.choi, seungho1.park, Luigi Semenzato,
	Nitin Gupta, Seth Jennings, David Horner

On Thu, Aug 21, 2014 at 03:08:12PM -0400, Dan Streetman wrote:
> On Wed, Aug 20, 2014 at 8:27 PM, Minchan Kim <minchan@kernel.org> wrote:
> > Since zram has no control feature to limit memory usage,
> > it makes hard to manage system memrory.
> >
> > This patch adds new knob "mem_limit" via sysfs to set up the
> > a limit so that zram could fail allocation once it reaches
> > the limit.
> >
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  Documentation/ABI/testing/sysfs-block-zram |  9 +++++++
> >  Documentation/blockdev/zram.txt            | 20 ++++++++++++---
> >  drivers/block/zram/zram_drv.c              | 41 ++++++++++++++++++++++++++++++
> >  drivers/block/zram/zram_drv.h              |  5 ++++
> >  4 files changed, 71 insertions(+), 4 deletions(-)
> >
> > diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
> > index 70ec992514d0..025331c19045 100644
> > --- a/Documentation/ABI/testing/sysfs-block-zram
> > +++ b/Documentation/ABI/testing/sysfs-block-zram
> > @@ -119,3 +119,12 @@ Description:
> >                 efficiency can be calculated using compr_data_size and this
> >                 statistic.
> >                 Unit: bytes
> > +
> > +What:          /sys/block/zram<id>/mem_limit
> > +Date:          August 2014
> > +Contact:       Minchan Kim <minchan@kernel.org>
> > +Description:
> > +               The mem_limit file is read/write and specifies the amount
> > +               of memory to be able to consume memory to store store
> > +               compressed data.
> 
> might want to clarify here that the value "0", which is the default,
> disables the limit.

Okay.

> 
> > +               Unit: bytes
> > diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> > index 0595c3f56ccf..9f239ff8c444 100644
> > --- a/Documentation/blockdev/zram.txt
> > +++ b/Documentation/blockdev/zram.txt
> > @@ -74,14 +74,26 @@ There is little point creating a zram of greater than twice the size of memory
> >  since we expect a 2:1 compression ratio. Note that zram uses about 0.1% of the
> >  size of the disk when not in use so a huge zram is wasteful.
> >
> > -5) Activate:
> > +5) Set memory limit: Optional
> > +       Set memory limit by writing the value to sysfs node 'mem_limit'.
> > +       The value can be either in bytes or you can use mem suffixes.
> > +       Examples:
> > +           # limit /dev/zram0 with 50MB memory
> > +           echo $((50*1024*1024)) > /sys/block/zram0/mem_limit
> > +
> > +           # Using mem suffixes
> > +           echo 256K > /sys/block/zram0/mem_limit
> > +           echo 512M > /sys/block/zram0/mem_limit
> > +           echo 1G > /sys/block/zram0/mem_limit
> 
> # To disable memory limit
> echo 0 > /sys/block/zram0/mem_limit

Yeb.

> 
> > +
> > +6) Activate:
> >         mkswap /dev/zram0
> >         swapon /dev/zram0
> >
> >         mkfs.ext4 /dev/zram1
> >         mount /dev/zram1 /tmp
> >
> > -6) Stats:
> > +7) Stats:
> >         Per-device statistics are exported as various nodes under
> >         /sys/block/zram<id>/
> >                 disksize
> > @@ -96,11 +108,11 @@ size of the disk when not in use so a huge zram is wasteful.
> >                 compr_data_size
> >                 mem_used_total
> >
> > -7) Deactivate:
> > +8) Deactivate:
> >         swapoff /dev/zram0
> >         umount /dev/zram1
> >
> > -8) Reset:
> > +9) Reset:
> >         Write any positive value to 'reset' sysfs node
> >         echo 1 > /sys/block/zram0/reset
> >         echo 1 > /sys/block/zram1/reset
> > diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> > index 302dd37bcea3..adc91c7ecaef 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -122,6 +122,33 @@ static ssize_t max_comp_streams_show(struct device *dev,
> >         return scnprintf(buf, PAGE_SIZE, "%d\n", val);
> >  }
> >
> > +static ssize_t mem_limit_show(struct device *dev,
> > +               struct device_attribute *attr, char *buf)
> > +{
> > +       u64 val;
> > +       struct zram *zram = dev_to_zram(dev);
> > +
> > +       down_read(&zram->init_lock);
> > +       val = zram->limit_pages;
> > +       up_read(&zram->init_lock);
> > +
> > +       return scnprintf(buf, PAGE_SIZE, "%llu\n", val << PAGE_SHIFT);
> > +}
> > +
> > +static ssize_t mem_limit_store(struct device *dev,
> > +               struct device_attribute *attr, const char *buf, size_t len)
> > +{
> > +       u64 limit;
> > +       struct zram *zram = dev_to_zram(dev);
> > +
> > +       limit = memparse(buf, NULL);
> > +       down_write(&zram->init_lock);
> > +       zram->limit_pages = PAGE_ALIGN(limit) >> PAGE_SHIFT;
> > +       up_write(&zram->init_lock);
> > +
> > +       return len;
> > +}
> > +
> >  static ssize_t max_comp_streams_store(struct device *dev,
> >                 struct device_attribute *attr, const char *buf, size_t len)
> >  {
> > @@ -513,6 +540,14 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >                 ret = -ENOMEM;
> >                 goto out;
> >         }
> > +
> > +       if (zram->limit_pages &&
> > +               zs_get_total_size(meta->mem_pool) > zram->limit_pages) {
> > +               zs_free(meta->mem_pool, handle);
> > +               ret = -ENOMEM;
> > +               goto out;
> > +       }
> > +
> >         cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
> >
> >         if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
> > @@ -617,6 +652,9 @@ static void zram_reset_device(struct zram *zram, bool reset_capacity)
> >         struct zram_meta *meta;
> >
> >         down_write(&zram->init_lock);
> > +
> > +       zram->limit_pages = 0;
> > +
> >         if (!init_done(zram)) {
> >                 up_write(&zram->init_lock);
> >                 return;
> > @@ -857,6 +895,8 @@ static DEVICE_ATTR(initstate, S_IRUGO, initstate_show, NULL);
> >  static DEVICE_ATTR(reset, S_IWUSR, NULL, reset_store);
> >  static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
> >  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
> > +static DEVICE_ATTR(mem_limit, S_IRUGO | S_IWUSR, mem_limit_show,
> > +               mem_limit_store);
> >  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
> >                 max_comp_streams_show, max_comp_streams_store);
> >  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
> > @@ -885,6 +925,7 @@ static struct attribute *zram_disk_attrs[] = {
> >         &dev_attr_orig_data_size.attr,
> >         &dev_attr_compr_data_size.attr,
> >         &dev_attr_mem_used_total.attr,
> > +       &dev_attr_mem_limit.attr,
> >         &dev_attr_max_comp_streams.attr,
> >         &dev_attr_comp_algorithm.attr,
> >         NULL,
> > diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> > index e0f725c87cc6..b7aa9c21553f 100644
> > --- a/drivers/block/zram/zram_drv.h
> > +++ b/drivers/block/zram/zram_drv.h
> > @@ -112,6 +112,11 @@ struct zram {
> >         u64 disksize;   /* bytes */
> >         int max_comp_streams;
> >         struct zram_stats stats;
> > +       /*
> > +        * the number of pages zram can consume for storing compressed data
> > +        */
> > +       unsigned long limit_pages;
> > +
> >         char compressor[10];
> >  };
> >  #endif
> > --
> > 2.0.0
> >



Thread overview: 30+ messages
2014-08-21  0:27 [PATCH v3 0/4] zram memory control enhance Minchan Kim
2014-08-21  0:27 ` [PATCH v3 1/4] zsmalloc: move pages_allocated to zs_pool Minchan Kim
2014-08-21  0:27 ` [PATCH v3 2/4] zsmalloc: change return value unit of zs_get_total_size_bytes Minchan Kim
2014-08-21 18:53   ` Dan Streetman
2014-08-21 23:22     ` Minchan Kim
2014-08-21 23:22     ` Seth Jennings
2014-08-21 23:23     ` Minchan Kim
2014-08-21  0:27 ` [PATCH v3 3/4] zram: zram memory size limitation Minchan Kim
2014-08-21 19:08   ` Dan Streetman
2014-08-21 23:36     ` Minchan Kim
2014-08-21  0:27 ` [PATCH v3 4/4] zram: report maximum used memory Minchan Kim
2014-08-21  2:20   ` David Horner
2014-08-21  2:41     ` Minchan Kim
2014-08-21  3:03       ` David Horner
2014-08-21  4:33         ` Minchan Kim
