* Re: [RFC 2/3] zsmalloc/zram: add zs_get_max_size_bytes and use it in zram
@ 2014-08-08  2:47 David Horner
  0 siblings, 0 replies; 7+ messages in thread
From: David Horner @ 2014-08-08  2:47 UTC (permalink / raw)
  To: minchan; +Cc: linux-kernel-mm, linux-kernel

 [2/3]


 But why isn't mem_used_max writable? (That would save tearing down and
 rebuilding the device just to reset the max.)

 static DEVICE_ATTR(mem_used_max, S_IRUGO, mem_used_max_show, NULL);

 static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show, mem_used_max_store);

   with a check in the store() that the new value is positive and less
than current max?
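
 Something along these lines, perhaps (untested sketch; zs_set_max_size_bytes
 would be a new zsmalloc helper that this patch does not provide):

 static ssize_t mem_used_max_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t len)
 {
 	unsigned long long limit;
 	struct zram *zram = dev_to_zram(dev);
 	int ret;

 	ret = kstrtoull(buf, 10, &limit);
 	if (ret)
 		return ret;

 	down_read(&zram->init_lock);
 	if (init_done(zram)) {
 		/* only allow lowering (or clearing) the recorded max */
 		if (limit < zs_get_max_size_bytes(zram->meta->mem_pool))
 			/* hypothetical setter on the zsmalloc side */
 			zs_set_max_size_bytes(zram->meta->mem_pool, limit);
 	}
 	up_read(&zram->init_lock);

 	return len;
 }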


 I'm also a little puzzled why there is a new API, zs_get_max_size_bytes, if
 the data is accessible through sysfs.
 Especially if the max limit will (as you propose for [3/3]) be handled
 through zsmalloc, and hence zram needn't access it.



  [3/3]
 I concur that the zram limit is best implemented in zsmalloc.
 I am looking forward to that revised code.


> From: Minchan Kim <minchan <at> kernel.org>
> Subject: [RFC 2/3] zsmalloc/zram: add zs_get_max_size_bytes and use it in
> zram
> <http://news.gmane.org/find-root.php?message_id=1407225723%2d23754%2d3%2dgit%2dsend%2demail%2dminchan%40kernel.org>
> Newsgroups: gmane.linux.kernel.mm
> <http://news.gmane.org/gmane.linux.kernel.mm>, gmane.linux.kernel
> <http://news.gmane.org/gmane.linux.kernel>
> Date: 2014-08-05 08:02:02 GMT
>
> Normally, a zram user can get the maximum memory zsmalloc has consumed
> by polling mem_used_total via sysfs from userspace.
>
> But this has a critical problem: the user can miss the peak memory
> usage between polling intervals, so the gap between the observed and
> the real peak can be huge when memory pressure is really heavy.
>
> This patch adds a new API, zs_get_max_size_bytes, to zsmalloc so the
> user (e.g. zram) doesn't need to poll at short intervals to get an
> exact value.
>
> The user can simply read the max memory usage once the test workload
> is done. It's pretty handy and accurate.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [RFC 2/3] zsmalloc/zram: add zs_get_max_size_bytes and use it in zram
  2014-08-05  8:02   ` Minchan Kim
@ 2014-08-13 15:25     ` Seth Jennings
  -1 siblings, 0 replies; 7+ messages in thread
From: Seth Jennings @ 2014-08-13 15:25 UTC (permalink / raw)
  To: Minchan Kim
  Cc: linux-mm, linux-kernel, Sergey Senozhatsky, Jerome Marchand,
	juno.choi, seungho1.park, Luigi Semenzato, Nitin Gupta

On Tue, Aug 05, 2014 at 05:02:02PM +0900, Minchan Kim wrote:
> Normally, a zram user can get the maximum memory zsmalloc has consumed
> by polling mem_used_total via sysfs from userspace.
>
> But this has a critical problem: the user can miss the peak memory
> usage between polling intervals, so the gap between the observed and
> the real peak can be huge when memory pressure is really heavy.
>
> This patch adds a new API, zs_get_max_size_bytes, to zsmalloc so the
> user (e.g. zram) doesn't need to poll at short intervals to get an
> exact value.
>
> The user can simply read the max memory usage once the test workload
> is done. It's pretty handy and accurate.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  Documentation/blockdev/zram.txt |  1 +
>  drivers/block/zram/zram_drv.c   | 17 +++++++++++++++++
>  include/linux/zsmalloc.h        |  1 +
>  mm/zsmalloc.c                   | 20 ++++++++++++++++++++
>  4 files changed, 39 insertions(+)
> 
> diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> index 0595c3f56ccf..d24534bee763 100644
> --- a/Documentation/blockdev/zram.txt
> +++ b/Documentation/blockdev/zram.txt
> @@ -95,6 +95,7 @@ size of the disk when not in use so a huge zram is wasteful.
>  		orig_data_size
>  		compr_data_size
>  		mem_used_total
> +		mem_used_max
>  
>  7) Deactivate:
>  	swapoff /dev/zram0
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 36e54be402df..a4d637b4db7d 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -109,6 +109,21 @@ static ssize_t mem_used_total_show(struct device *dev,
>  	return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
>  }
>  
> +static ssize_t mem_used_max_show(struct device *dev,
> +		struct device_attribute *attr, char *buf)
> +{
> +	u64 val = 0;
> +	struct zram *zram = dev_to_zram(dev);
> +	struct zram_meta *meta = zram->meta;
> +
> +	down_read(&zram->init_lock);
> +	if (init_done(zram))
> +		val = zs_get_max_size_bytes(meta->mem_pool);
> +	up_read(&zram->init_lock);
> +
> +	return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
> +}
> +
>  static ssize_t max_comp_streams_show(struct device *dev,
>  		struct device_attribute *attr, char *buf)
>  {
> @@ -838,6 +853,7 @@ static DEVICE_ATTR(initstate, S_IRUGO, initstate_show, NULL);
>  static DEVICE_ATTR(reset, S_IWUSR, NULL, reset_store);
>  static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
>  static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
> +static DEVICE_ATTR(mem_used_max, S_IRUGO, mem_used_max_show, NULL);
>  static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
>  		max_comp_streams_show, max_comp_streams_store);
>  static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
> @@ -866,6 +882,7 @@ static struct attribute *zram_disk_attrs[] = {
>  	&dev_attr_orig_data_size.attr,
>  	&dev_attr_compr_data_size.attr,
>  	&dev_attr_mem_used_total.attr,
> +	&dev_attr_mem_used_max.attr,
>  	&dev_attr_max_comp_streams.attr,
>  	&dev_attr_comp_algorithm.attr,
>  	NULL,
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> index e44d634e7fb7..fb087ca06a88 100644
> --- a/include/linux/zsmalloc.h
> +++ b/include/linux/zsmalloc.h
> @@ -47,5 +47,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
>  
>  u64 zs_get_total_size_bytes(struct zs_pool *pool);
> +u64 zs_get_max_size_bytes(struct zs_pool *pool);
>  
>  #endif
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index a6089bd26621..3b5be076268a 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -219,6 +219,7 @@ struct zs_pool {
>  
>  	gfp_t flags;	/* allocation flags used when growing pool */
>  	unsigned long pages_allocated;
> +	unsigned long max_pages_allocated;

Same here with atomic.
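
For example, if both counters become atomic_long_t, the peak could be
tracked without taking pool->stat_lock at all. A rough sketch of one way
to do it (illustrative only, not the posted code; zs_update_max_pages is
a made-up helper):

	/* in struct zs_pool, replacing the plain unsigned longs */
	atomic_long_t pages_allocated;
	atomic_long_t max_pages_allocated;

	/* hypothetical helper, called after pages_allocated is increased */
	static void zs_update_max_pages(struct zs_pool *pool)
	{
		long cur = atomic_long_read(&pool->pages_allocated);
		long max = atomic_long_read(&pool->max_pages_allocated);

		/* lockless max update: retry if another CPU raced us */
		while (cur > max) {
			long old = atomic_long_cmpxchg(
					&pool->max_pages_allocated, max, cur);
			if (old == max)
				break;
			max = old;
		}
	}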

Seth

>  };
>  
>  /*
> @@ -946,6 +947,8 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
>  		set_zspage_mapping(first_page, class->index, ZS_EMPTY);
>  		spin_lock(&pool->stat_lock);
>  		pool->pages_allocated += class->pages_per_zspage;
> +		if (pool->max_pages_allocated < pool->pages_allocated)
> +			pool->max_pages_allocated = pool->pages_allocated;
>  		spin_unlock(&pool->stat_lock);
>  		spin_lock(&class->lock);
>  	}
> @@ -1101,6 +1104,9 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
>  }
>  EXPORT_SYMBOL_GPL(zs_unmap_object);
>  
> +/*
> + * Reports the total memory currently consumed by the pool
> + */
>  u64 zs_get_total_size_bytes(struct zs_pool *pool)
>  {
>  	u64 npages;
> @@ -1112,6 +1118,20 @@ u64 zs_get_total_size_bytes(struct zs_pool *pool)
>  }
>  EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
>  
> +/*
> + * Reports the maximum memory the pool has ever consumed
> + */
> +u64 zs_get_max_size_bytes(struct zs_pool *pool)
> +{
> +	u64 npages;
> +
> +	spin_lock(&pool->stat_lock);
> +	npages = pool->max_pages_allocated;
> +	spin_unlock(&pool->stat_lock);
> +	return npages << PAGE_SHIFT;
> +}
> +EXPORT_SYMBOL_GPL(zs_get_max_size_bytes);
> +
>  module_init(zs_init);
>  module_exit(zs_exit);
>  
> -- 
> 2.0.0
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [RFC 2/3] zsmalloc/zram: add zs_get_max_size_bytes and use it in zram
  2014-08-08  2:56 David Horner
@ 2014-08-12  7:18 ` Minchan Kim
  0 siblings, 0 replies; 7+ messages in thread
From: Minchan Kim @ 2014-08-12  7:18 UTC (permalink / raw)
  To: David Horner; +Cc: linux-mm

Hello,

Sorry for the late response. I was on vacation and then was busy.

On Fri, Aug 08, 2014 at 02:56:24AM +0000, David Horner wrote:
> 
>  [2/3]
> 
> 
>  But why isn't mem_used_max writable? (That would save tearing down and
>  rebuilding the device just to reset the max.)

I don't know what you mean, but I will make it writable so the user can
reset it to zero when they want.
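
Roughly like this, I think (untested; zs_reset_max_size_bytes would be a
new helper on the zsmalloc side):

	static ssize_t mem_used_max_store(struct device *dev,
			struct device_attribute *attr, const char *buf, size_t len)
	{
		struct zram *zram = dev_to_zram(dev);
		unsigned long val;

		if (kstrtoul(buf, 10, &val) || val != 0)
			return -EINVAL;

		down_read(&zram->init_lock);
		if (init_done(zram))
			/* hypothetical helper that clears max_pages_allocated */
			zs_reset_max_size_bytes(zram->meta->mem_pool);
		up_read(&zram->init_lock);

		return len;
	}

Accepting only 0 keeps the semantics simple: the attribute is a peak
watermark, so clearing it is the only write that really makes sense.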

> 
>  static DEVICE_ATTR(mem_used_max, S_IRUGO, mem_used_max_show, NULL);
> 
>  static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show, mem_used_max_store);
> 
>    with a check in the store() that the new value is positive and less
> than current max?
> 
> 
>  I'm also a little puzzled why there is a new API, zs_get_max_size_bytes, if
>  the data is accessible through sysfs.
>  Especially if the max limit will (as you propose for [3/3]) be handled
>  through zsmalloc, and hence zram needn't access it.

I don't know what you meant.
Anyway, I will resend a revised version and Cc you.
Please comment on that. :)

> 
> 
> 
>   [3/3]
>  I concur that the zram limit is best implemented in zsmalloc.
>  I am looking forward to that revised code.

Thanks!


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [RFC 2/3] zsmalloc/zram: add zs_get_max_size_bytes and use it in zram
@ 2014-08-08  2:56 David Horner
  2014-08-12  7:18 ` Minchan Kim
  0 siblings, 1 reply; 7+ messages in thread
From: David Horner @ 2014-08-08  2:56 UTC (permalink / raw)
  To: linux-mm


 [2/3]


 But why isn't mem_used_max writable? (That would save tearing down and
 rebuilding the device just to reset the max.)

 static DEVICE_ATTR(mem_used_max, S_IRUGO, mem_used_max_show, NULL);

 static DEVICE_ATTR(mem_used_max, S_IRUGO | S_IWUSR, mem_used_max_show, mem_used_max_store);

   with a check in the store() that the new value is positive and less
than current max?


 I'm also a little puzzled why there is a new API, zs_get_max_size_bytes, if
 the data is accessible through sysfs.
 Especially if the max limit will (as you propose for [3/3]) be handled
 through zsmalloc, and hence zram needn't access it.



  [3/3]
 I concur that the zram limit is best implemented in zsmalloc.
 I am looking forward to that revised code.





^ permalink raw reply	[flat|nested] 7+ messages in thread

* [RFC 2/3] zsmalloc/zram: add zs_get_max_size_bytes and use it in zram
  2014-08-05  8:02 [RFC 0/3] zram memory control enhance Minchan Kim
@ 2014-08-05  8:02   ` Minchan Kim
  0 siblings, 0 replies; 7+ messages in thread
From: Minchan Kim @ 2014-08-05  8:02 UTC (permalink / raw)
  To: linux-mm
  Cc: linux-kernel, Sergey Senozhatsky, Jerome Marchand, juno.choi,
	seungho1.park, Luigi Semenzato, Nitin Gupta, Minchan Kim

Normally, a zram user can get the maximum memory zsmalloc has consumed
by polling mem_used_total via sysfs from userspace.

But this has a critical problem: the user can miss the peak memory
usage between polling intervals, so the gap between the observed and
the real peak can be huge when memory pressure is really heavy.

This patch adds a new API, zs_get_max_size_bytes, to zsmalloc so the
user (e.g. zram) doesn't need to poll at short intervals to get an
exact value.

The user can simply read the max memory usage once the test workload
is done. It's pretty handy and accurate.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 Documentation/blockdev/zram.txt |  1 +
 drivers/block/zram/zram_drv.c   | 17 +++++++++++++++++
 include/linux/zsmalloc.h        |  1 +
 mm/zsmalloc.c                   | 20 ++++++++++++++++++++
 4 files changed, 39 insertions(+)

diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
index 0595c3f56ccf..d24534bee763 100644
--- a/Documentation/blockdev/zram.txt
+++ b/Documentation/blockdev/zram.txt
@@ -95,6 +95,7 @@ size of the disk when not in use so a huge zram is wasteful.
 		orig_data_size
 		compr_data_size
 		mem_used_total
+		mem_used_max
 
 7) Deactivate:
 	swapoff /dev/zram0
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 36e54be402df..a4d637b4db7d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -109,6 +109,21 @@ static ssize_t mem_used_total_show(struct device *dev,
 	return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
 }
 
+static ssize_t mem_used_max_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	u64 val = 0;
+	struct zram *zram = dev_to_zram(dev);
+	struct zram_meta *meta = zram->meta;
+
+	down_read(&zram->init_lock);
+	if (init_done(zram))
+		val = zs_get_max_size_bytes(meta->mem_pool);
+	up_read(&zram->init_lock);
+
+	return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
+}
+
 static ssize_t max_comp_streams_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
@@ -838,6 +853,7 @@ static DEVICE_ATTR(initstate, S_IRUGO, initstate_show, NULL);
 static DEVICE_ATTR(reset, S_IWUSR, NULL, reset_store);
 static DEVICE_ATTR(orig_data_size, S_IRUGO, orig_data_size_show, NULL);
 static DEVICE_ATTR(mem_used_total, S_IRUGO, mem_used_total_show, NULL);
+static DEVICE_ATTR(mem_used_max, S_IRUGO, mem_used_max_show, NULL);
 static DEVICE_ATTR(max_comp_streams, S_IRUGO | S_IWUSR,
 		max_comp_streams_show, max_comp_streams_store);
 static DEVICE_ATTR(comp_algorithm, S_IRUGO | S_IWUSR,
@@ -866,6 +882,7 @@ static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_orig_data_size.attr,
 	&dev_attr_compr_data_size.attr,
 	&dev_attr_mem_used_total.attr,
+	&dev_attr_mem_used_max.attr,
 	&dev_attr_max_comp_streams.attr,
 	&dev_attr_comp_algorithm.attr,
 	NULL,
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index e44d634e7fb7..fb087ca06a88 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -47,5 +47,6 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
 
 u64 zs_get_total_size_bytes(struct zs_pool *pool);
+u64 zs_get_max_size_bytes(struct zs_pool *pool);
 
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index a6089bd26621..3b5be076268a 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -219,6 +219,7 @@ struct zs_pool {
 
 	gfp_t flags;	/* allocation flags used when growing pool */
 	unsigned long pages_allocated;
+	unsigned long max_pages_allocated;
 };
 
 /*
@@ -946,6 +947,8 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
 		set_zspage_mapping(first_page, class->index, ZS_EMPTY);
 		spin_lock(&pool->stat_lock);
 		pool->pages_allocated += class->pages_per_zspage;
+		if (pool->max_pages_allocated < pool->pages_allocated)
+			pool->max_pages_allocated = pool->pages_allocated;
 		spin_unlock(&pool->stat_lock);
 		spin_lock(&class->lock);
 	}
@@ -1101,6 +1104,9 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
+/*
+ * Reports the total memory currently consumed by the pool
+ */
 u64 zs_get_total_size_bytes(struct zs_pool *pool)
 {
 	u64 npages;
@@ -1112,6 +1118,20 @@ u64 zs_get_total_size_bytes(struct zs_pool *pool)
 }
 EXPORT_SYMBOL_GPL(zs_get_total_size_bytes);
 
+/*
+ * Reports the maximum memory the pool has ever consumed
+ */
+u64 zs_get_max_size_bytes(struct zs_pool *pool)
+{
+	u64 npages;
+
+	spin_lock(&pool->stat_lock);
+	npages = pool->max_pages_allocated;
+	spin_unlock(&pool->stat_lock);
+	return npages << PAGE_SHIFT;
+}
+EXPORT_SYMBOL_GPL(zs_get_max_size_bytes);
+
 module_init(zs_init);
 module_exit(zs_exit);
 
-- 
2.0.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

end of thread

Thread overview: 7+ messages
2014-08-08  2:47 [RFC 2/3] zsmalloc/zram: add zs_get_max_size_bytes and use it in zram David Horner
  -- strict thread matches above, loose matches on Subject: below --
2014-08-08  2:56 David Horner
2014-08-12  7:18 ` Minchan Kim
2014-08-05  8:02 [RFC 0/3] zram memory control enhance Minchan Kim
2014-08-05  8:02 ` [RFC 2/3] zsmalloc/zram: add zs_get_max_size_bytes and use it in zram Minchan Kim
2014-08-05  8:02   ` Minchan Kim
2014-08-13 15:25   ` Seth Jennings
2014-08-13 15:25     ` Seth Jennings
