linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently
@ 2013-03-15  2:34 Wanpeng Li
  2013-03-15  2:34 ` [PATCH v3 1/5] introduce zero filled pages handler Wanpeng Li
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Wanpeng Li @ 2013-03-15  2:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Andrew Morton
  Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
	Minchan Kim, linux-mm, linux-kernel, Wanpeng Li

Changelog:
 v2 -> v3:
  * increment/decrement zcache_[eph|pers]_zpages for zero-filled pages, spotted by Dan 
  * replace "zero" or "zero page" by "zero_filled_page", spotted by Dan
 v1 -> v2:
  * avoid changing tmem.[ch] entirely, spotted by Dan.
  * don't accumulate [eph|pers]pageframe and [eph|pers]zpages for 
    zero-filled pages, spotted by Dan
  * cleanup TODO list
  * add Dan Acked-by.

Motivation:

- Seth Jennings pointed out that compressing zero-filled pages with LZO (a
  lossless data compression algorithm) wastes memory and results in
  fragmentation. https://lkml.org/lkml/2012/8/14/347
- Dan Magenheimer added "Support zero-filled pages more efficiently" to the
  zcache TODO list: https://lkml.org/lkml/2013/2/13/503

Design:

- On store, capture zero-filled pages (evicted clean page cache pages and
  swap pages) but do not compress them. Instead, set the pampd, which
  normally stores the zpage address, to 0x2 (0x0 and 0x1 are already
  occupied) to mark the special zero-filled case, and take advantage of the
  tmem infrastructure to transform the handle tuple (pool id, object id, and
  an index) into a pampd. Without this optimization, every two compressed
  zero-filled pages accumulate one zcache_[eph|pers]_pageframes count.
- On load, when the filesystem reads file pages or a page needs to be
  swapped in, traverse the tmem hierarchy to transform the handle tuple into
  a pampd; a pampd equal to 0x2 identifies the zero-filled case, so refill
  the page with zeros and return.

Test:

dd if=/dev/zero of=zerofile bs=1MB count=500
vmtouch -t zerofile
vmtouch -e zerofile

formula:
- fragmentation level = (zcache_[eph|pers]_pageframes * PAGE_SIZE - zcache_[eph|pers]_zbytes) 
  * 100 / (zcache_[eph|pers]_pageframes * PAGE_SIZE)
- memory occupied by zcache = zcache_[eph|pers]_zbytes

Result:

without zero-filled awareness:
- fragmentation level: 98%
- memory occupied by zcache: 238MB
with zero-filled awareness:
- fragmentation level: 0%
- memory occupied by zcache: 0MB

Wanpeng Li (5):
  introduce zero-filled pages handler
  zero-filled pages awareness
  handle zcache_[eph|pers]_zpages for zero-filled page
  introduce zero-filled page stat count
  clean TODO list

 drivers/staging/zcache/TODO          |    3 +-
 drivers/staging/zcache/zcache-main.c |  119 ++++++++++++++++++++++++++++++++--
 2 files changed, 114 insertions(+), 8 deletions(-)

-- 
1.7.7.6


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v3 1/5] introduce zero filled pages handler
  2013-03-15  2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
@ 2013-03-15  2:34 ` Wanpeng Li
  2013-03-15  2:34 ` [PATCH v3 2/5] zero-filled pages awareness Wanpeng Li
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Wanpeng Li @ 2013-03-15  2:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Andrew Morton
  Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
	Minchan Kim, linux-mm, linux-kernel, Wanpeng Li

Introduce a zero-filled page handler to detect and handle zero-filled pages.

Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
 drivers/staging/zcache/zcache-main.c |   26 ++++++++++++++++++++++++++
 1 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index 328898e..d73dd4b 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -460,6 +460,32 @@ static void zcache_obj_free(struct tmem_obj *obj, struct tmem_pool *pool)
 	kmem_cache_free(zcache_obj_cache, obj);
 }
 
+static bool page_is_zero_filled(void *ptr)
+{
+	unsigned int pos;
+	unsigned long *page;
+
+	page = (unsigned long *)ptr;
+
+	for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++) {
+		if (page[pos])
+			return false;
+	}
+
+	return true;
+}
+
+static void handle_zero_filled_page(void *page)
+{
+	void *user_mem;
+
+	user_mem = kmap_atomic(page);
+	memset(user_mem, 0, PAGE_SIZE);
+	kunmap_atomic(user_mem);
+
+	flush_dcache_page(page);
+}
+
 static struct tmem_hostops zcache_hostops = {
 	.obj_alloc = zcache_obj_alloc,
 	.obj_free = zcache_obj_free,
-- 
1.7.7.6


* [PATCH v3 2/5] zero-filled pages awareness
  2013-03-15  2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
  2013-03-15  2:34 ` [PATCH v3 1/5] introduce zero filled pages handler Wanpeng Li
@ 2013-03-15  2:34 ` Wanpeng Li
  2013-03-16 14:12   ` Bob Liu
  2013-03-19  0:50   ` Greg Kroah-Hartman
  2013-03-15  2:34 ` [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page Wanpeng Li
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 10+ messages in thread
From: Wanpeng Li @ 2013-03-15  2:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Andrew Morton
  Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
	Minchan Kim, linux-mm, linux-kernel, Wanpeng Li

Compressing zero-filled pages unnecessarily causes internal
fragmentation and thus wastes memory. This special case can be
optimized.

This patch captures zero-filled pages and marks their corresponding
zcache backing page entry as zero-filled. Whenever such a zero-filled
page is retrieved, we refill the page frame with zeros.

Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
 drivers/staging/zcache/zcache-main.c |   81 +++++++++++++++++++++++++++++++---
 1 files changed, 75 insertions(+), 6 deletions(-)

diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index d73dd4b..6c35c7d 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -59,6 +59,12 @@ static inline void frontswap_tmem_exclusive_gets(bool b)
 }
 #endif
 
+/*
+ * Mark the pampd with a special value so that a later
+ * retrieval can identify zero-filled pages
+ */
+#define ZERO_FILLED 0x2
+
 /* enable (or fix code) when Seth's patches are accepted upstream */
 #define zcache_writeback_enabled 0
 
@@ -543,7 +549,23 @@ static void *zcache_pampd_eph_create(char *data, size_t size, bool raw,
 {
 	void *pampd = NULL, *cdata = data;
 	unsigned clen = size;
+	bool zero_filled = false;
 	struct page *page = (struct page *)(data), *newpage;
+	char *user_mem;
+
+	user_mem = kmap_atomic(page);
+
+	/*
+	 * Compressing zero-filled pages will waste memory and introduce
+	 * serious fragmentation, skip it to avoid overhead
+	 */
+	if (page_is_zero_filled(user_mem)) {
+		kunmap_atomic(user_mem);
+		clen = 0;
+		zero_filled = true;
+		goto got_pampd;
+	}
+	kunmap_atomic(user_mem);
 
 	if (!raw) {
 		zcache_compress(page, &cdata, &clen);
@@ -592,6 +614,8 @@ got_pampd:
 		zcache_eph_zpages_max = zcache_eph_zpages;
 	if (ramster_enabled && raw)
 		ramster_count_foreign_pages(true, 1);
+	if (zero_filled)
+		pampd = (void *)ZERO_FILLED;
 out:
 	return pampd;
 }
@@ -601,14 +625,31 @@ static void *zcache_pampd_pers_create(char *data, size_t size, bool raw,
 {
 	void *pampd = NULL, *cdata = data;
 	unsigned clen = size;
+	bool zero_filled = false;
 	struct page *page = (struct page *)(data), *newpage;
 	unsigned long zbud_mean_zsize;
 	unsigned long curr_pers_zpages, total_zsize;
+	char *user_mem;
 
 	if (data == NULL) {
 		BUG_ON(!ramster_enabled);
 		goto create_pampd;
 	}
+
+	user_mem = kmap_atomic(page);
+
+	/*
+	 * Compressing zero-filled pages will waste memory and introduce
+	 * serious fragmentation, skip it to avoid overhead
+	 */
+	if (page_is_zero_filled(user_mem)) {
+		kunmap_atomic(user_mem);
+		clen = 0;
+		zero_filled = true;
+		goto got_pampd;
+	}
+	kunmap_atomic(user_mem);
+
 	curr_pers_zpages = zcache_pers_zpages;
 /* FIXME CONFIG_RAMSTER... subtract atomic remote_pers_pages here? */
 	if (!raw)
@@ -674,6 +715,8 @@ got_pampd:
 		zcache_pers_zbytes_max = zcache_pers_zbytes;
 	if (ramster_enabled && raw)
 		ramster_count_foreign_pages(false, 1);
+	if (zero_filled)
+		pampd = (void *)ZERO_FILLED;
 out:
 	return pampd;
 }
@@ -735,7 +778,8 @@ out:
  */
 void zcache_pampd_create_finish(void *pampd, bool eph)
 {
-	zbud_create_finish((struct zbudref *)pampd, eph);
+	if (pampd != (void *)ZERO_FILLED)
+		zbud_create_finish((struct zbudref *)pampd, eph);
 }
 
 /*
@@ -780,6 +824,14 @@ static int zcache_pampd_get_data(char *data, size_t *sizep, bool raw,
 	BUG_ON(preemptible());
 	BUG_ON(eph);	/* fix later if shared pools get implemented */
 	BUG_ON(pampd_is_remote(pampd));
+
+	if (pampd == (void *)ZERO_FILLED) {
+		handle_zero_filled_page(data);
+		if (!raw)
+			*sizep = PAGE_SIZE;
+		return 0;
+	}
+
 	if (raw)
 		ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
 						sizep, eph);
@@ -801,12 +853,21 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
 					struct tmem_oid *oid, uint32_t index)
 {
 	int ret;
-	bool eph = !is_persistent(pool);
+	bool eph = !is_persistent(pool), zero_filled = false;
 	struct page *page = NULL;
 	unsigned int zsize, zpages;
 
 	BUG_ON(preemptible());
 	BUG_ON(pampd_is_remote(pampd));
+
+	if (pampd == (void *)ZERO_FILLED) {
+		handle_zero_filled_page(data);
+		zero_filled = true;
+		if (!raw)
+			*sizep = PAGE_SIZE;
+		goto zero_fill;
+	}
+
 	if (raw)
 		ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
 						sizep, eph);
@@ -818,6 +879,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
 	}
 	page = zbud_free_and_delist((struct zbudref *)pampd, eph,
 					&zsize, &zpages);
+zero_fill:
 	if (eph) {
 		if (page)
 			zcache_eph_pageframes =
@@ -837,7 +899,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
 	}
 	if (!is_local_client(pool->client))
 		ramster_count_foreign_pages(eph, -1);
-	if (page)
+	if (page && !zero_filled)
 		zcache_free_page(page);
 	return ret;
 }
@@ -851,16 +913,23 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
 {
 	struct page *page = NULL;
 	unsigned int zsize, zpages;
+	bool zero_filled = false;
 
 	BUG_ON(preemptible());
-	if (pampd_is_remote(pampd)) {
+
+	if (pampd == (void *)ZERO_FILLED)
+		zero_filled = true;
+
+	if (pampd_is_remote(pampd) && !zero_filled) {
+
 		BUG_ON(!ramster_enabled);
 		pampd = ramster_pampd_free(pampd, pool, oid, index, acct);
 		if (pampd == NULL)
 			return;
 	}
 	if (is_ephemeral(pool)) {
-		page = zbud_free_and_delist((struct zbudref *)pampd,
+		if (!zero_filled)
+			page = zbud_free_and_delist((struct zbudref *)pampd,
 						true, &zsize, &zpages);
 		if (page)
 			zcache_eph_pageframes =
@@ -883,7 +952,7 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
 	}
 	if (!is_local_client(pool->client))
 		ramster_count_foreign_pages(is_ephemeral(pool), -1);
-	if (page)
+	if (page && !zero_filled)
 		zcache_free_page(page);
 }
 
-- 
1.7.7.6


* [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page
  2013-03-15  2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
  2013-03-15  2:34 ` [PATCH v3 1/5] introduce zero filled pages handler Wanpeng Li
  2013-03-15  2:34 ` [PATCH v3 2/5] zero-filled pages awareness Wanpeng Li
@ 2013-03-15  2:34 ` Wanpeng Li
  2013-03-16 13:11   ` Konrad Rzeszutek Wilk
  2013-03-15  2:34 ` [PATCH v3 4/5] introduce zero-filled page stat count Wanpeng Li
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Wanpeng Li @ 2013-03-15  2:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Andrew Morton
  Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
	Minchan Kim, linux-mm, linux-kernel, Wanpeng Li

Increment/decrement zcache_[eph|pers]_zpages for zero-filled pages. The
main point of the zpages and pageframes counters is to be able to
calculate density == zpages/pageframes. A zero-filled page becomes a
zpage that "compresses" to zero bytes and, as a result, requires zero
pageframes for storage. So the zpages counter should be increased but
the pageframes counter should not.

[Dan Magenheimer <dan.magenheimer@oracle.com>: patch description]
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
 drivers/staging/zcache/zcache-main.c |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index 6c35c7d..ef8c960 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -863,6 +863,8 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
 	if (pampd == (void *)ZERO_FILLED) {
 		handle_zero_filled_page(data);
 		zero_filled = true;
+		zsize = 0;
+		zpages = 1;
 		if (!raw)
 			*sizep = PAGE_SIZE;
 		goto zero_fill;
@@ -917,8 +919,11 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
 
 	BUG_ON(preemptible());
 
-	if (pampd == (void *)ZERO_FILLED)
+	if (pampd == (void *)ZERO_FILLED) {
 		zero_filled = true;
+		zsize = 0;
+		zpages = 1;
+	}
 
 	if (pampd_is_remote(pampd) && !zero_filled) {
 
-- 
1.7.7.6


* [PATCH v3 4/5] introduce zero-filled page stat count
  2013-03-15  2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
                   ` (2 preceding siblings ...)
  2013-03-15  2:34 ` [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page Wanpeng Li
@ 2013-03-15  2:34 ` Wanpeng Li
  2013-03-15  2:34 ` [PATCH v3 5/5] clean TODO list Wanpeng Li
  2013-03-19  0:23 ` [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Greg Kroah-Hartman
  5 siblings, 0 replies; 10+ messages in thread
From: Wanpeng Li @ 2013-03-15  2:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Andrew Morton
  Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
	Minchan Kim, linux-mm, linux-kernel, Wanpeng Li

Introduce zero-filled page statistics to monitor the number of
zero-filled pages.

Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
 drivers/staging/zcache/zcache-main.c |    7 +++++++
 1 files changed, 7 insertions(+), 0 deletions(-)

diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index ef8c960..bc7ccbb 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -197,6 +197,7 @@ static ssize_t zcache_eph_nonactive_puts_ignored;
 static ssize_t zcache_pers_nonactive_puts_ignored;
 static ssize_t zcache_writtenback_pages;
 static ssize_t zcache_outstanding_writeback_pages;
+static ssize_t zcache_zero_filled_pages;
 
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
@@ -258,6 +259,7 @@ static int zcache_debugfs_init(void)
 	zdfs("outstanding_writeback_pages", S_IRUGO, root,
 				&zcache_outstanding_writeback_pages);
 	zdfs("writtenback_pages", S_IRUGO, root, &zcache_writtenback_pages);
+	zdfs("zero_filled_pages", S_IRUGO, root, &zcache_zero_filled_pages);
 	return 0;
 }
 #undef	zdebugfs
@@ -327,6 +329,7 @@ void zcache_dump(void)
 	pr_info("zcache: outstanding_writeback_pages=%zd\n",
 				zcache_outstanding_writeback_pages);
 	pr_info("zcache: writtenback_pages=%zd\n", zcache_writtenback_pages);
+	pr_info("zcache: zero_filled_pages=%zd\n", zcache_zero_filled_pages);
 }
 #endif
 
@@ -563,6 +566,7 @@ static void *zcache_pampd_eph_create(char *data, size_t size, bool raw,
 		kunmap_atomic(user_mem);
 		clen = 0;
 		zero_filled = true;
+		zcache_zero_filled_pages++;
 		goto got_pampd;
 	}
 	kunmap_atomic(user_mem);
@@ -646,6 +650,7 @@ static void *zcache_pampd_pers_create(char *data, size_t size, bool raw,
 		kunmap_atomic(user_mem);
 		clen = 0;
 		zero_filled = true;
+		zcache_zero_filled_pages++;
 		goto got_pampd;
 	}
 	kunmap_atomic(user_mem);
@@ -867,6 +872,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
 		zpages = 1;
 		if (!raw)
 			*sizep = PAGE_SIZE;
+		zcache_zero_filled_pages--;
 		goto zero_fill;
 	}
 
@@ -923,6 +929,7 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
 		zero_filled = true;
 		zsize = 0;
 		zpages = 1;
+		zcache_zero_filled_pages--;
 	}
 
 	if (pampd_is_remote(pampd) && !zero_filled) {
-- 
1.7.7.6


* [PATCH v3 5/5] clean TODO list
  2013-03-15  2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
                   ` (3 preceding siblings ...)
  2013-03-15  2:34 ` [PATCH v3 4/5] introduce zero-filled page stat count Wanpeng Li
@ 2013-03-15  2:34 ` Wanpeng Li
  2013-03-19  0:23 ` [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Greg Kroah-Hartman
  5 siblings, 0 replies; 10+ messages in thread
From: Wanpeng Li @ 2013-03-15  2:34 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Andrew Morton
  Cc: Dan Magenheimer, Seth Jennings, Konrad Rzeszutek Wilk,
	Minchan Kim, linux-mm, linux-kernel, Wanpeng Li

Clean up the TODO list: supporting zero-filled pages more efficiently
has now been done by this patchset.

Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
 drivers/staging/zcache/TODO |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/zcache/TODO b/drivers/staging/zcache/TODO
index c1e26d4..9e755d3 100644
--- a/drivers/staging/zcache/TODO
+++ b/drivers/staging/zcache/TODO
@@ -65,5 +65,4 @@ ZCACHE FUTURE NEW FUNCTIONALITY
 
 A. Support zsmalloc as an alternative high-density allocator
     (See https://lkml.org/lkml/2013/1/23/511)
-B. Support zero-filled pages more efficiently
-C. Possibly support three zbuds per pageframe when space allows
+B. Possibly support three zbuds per pageframe when space allows
-- 
1.7.7.6


* Re: [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page
  2013-03-15  2:34 ` [PATCH v3 3/5] handle zcache_[eph|pers]_zpages for zero-filled page Wanpeng Li
@ 2013-03-16 13:11   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 10+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-03-16 13:11 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Greg Kroah-Hartman, Andrew Morton, Dan Magenheimer,
	Seth Jennings, Minchan Kim, linux-mm, linux-kernel

On Fri, Mar 15, 2013 at 10:34:18AM +0800, Wanpeng Li wrote:
> Increment/decrement zcache_[eph|pers]_zpages for zero-filled pages. The
> main point of the zpages and pageframes counters is to be able to
> calculate density == zpages/pageframes. A zero-filled page becomes a
> zpage that "compresses" to zero bytes and, as a result, requires zero
> pageframes for storage. So the zpages counter should be increased but
> the pageframes counter should not.
> 
> [Dan Magenheimer <dan.magenheimer@oracle.com>: patch description]
> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
> ---
>  drivers/staging/zcache/zcache-main.c |    7 ++++++-
>  1 files changed, 6 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
> index 6c35c7d..ef8c960 100644
> --- a/drivers/staging/zcache/zcache-main.c
> +++ b/drivers/staging/zcache/zcache-main.c
> @@ -863,6 +863,8 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
>  	if (pampd == (void *)ZERO_FILLED) {
>  		handle_zero_filled_page(data);
>  		zero_filled = true;
> +		zsize = 0;
> +		zpages = 1;
>  		if (!raw)
>  			*sizep = PAGE_SIZE;
>  		goto zero_fill;
> @@ -917,8 +919,11 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
>  
>  	BUG_ON(preemptible());
>  
> -	if (pampd == (void *)ZERO_FILLED)
> +	if (pampd == (void *)ZERO_FILLED) {
>  		zero_filled = true;
> +		zsize = 0;
> +		zpages = 1;
> +	}
>  
>  	if (pampd_is_remote(pampd) && !zero_filled) {
>  
> -- 
> 1.7.7.6
> 

* Re: [PATCH v3 2/5] zero-filled pages awareness
  2013-03-15  2:34 ` [PATCH v3 2/5] zero-filled pages awareness Wanpeng Li
@ 2013-03-16 14:12   ` Bob Liu
  2013-03-19  0:50   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 10+ messages in thread
From: Bob Liu @ 2013-03-16 14:12 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Greg Kroah-Hartman, Andrew Morton, Dan Magenheimer,
	Seth Jennings, Konrad Rzeszutek Wilk, Minchan Kim, linux-mm,
	linux-kernel


On 03/15/2013 10:34 AM, Wanpeng Li wrote:
> Compressing zero-filled pages unnecessarily causes internal
> fragmentation and thus wastes memory. This special case can be
> optimized.
> 
> This patch captures zero-filled pages and marks their corresponding
> zcache backing page entry as zero-filled. Whenever such a zero-filled
> page is retrieved, we refill the page frame with zeros.
> 
> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
> ---
>  drivers/staging/zcache/zcache-main.c |   81 +++++++++++++++++++++++++++++++---
>  1 files changed, 75 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
> index d73dd4b..6c35c7d 100644
> --- a/drivers/staging/zcache/zcache-main.c
> +++ b/drivers/staging/zcache/zcache-main.c
> @@ -59,6 +59,12 @@ static inline void frontswap_tmem_exclusive_gets(bool b)
>  }
>  #endif
>  
> +/*
> + * Mark the pampd with a special value so that a later
> + * retrieval can identify zero-filled pages
> + */
> +#define ZERO_FILLED 0x2
> +
>  /* enable (or fix code) when Seth's patches are accepted upstream */
>  #define zcache_writeback_enabled 0
>  
> @@ -543,7 +549,23 @@ static void *zcache_pampd_eph_create(char *data, size_t size, bool raw,
>  {
>  	void *pampd = NULL, *cdata = data;
>  	unsigned clen = size;
> +	bool zero_filled = false;
>  	struct page *page = (struct page *)(data), *newpage;
> +	char *user_mem;
> +
> +	user_mem = kmap_atomic(page);
> +
> +	/*
> +	 * Compressing zero-filled pages will waste memory and introduce
> +	 * serious fragmentation, skip it to avoid overhead
> +	 */
> +	if (page_is_zero_filled(user_mem)) {
> +		kunmap_atomic(user_mem);
> +		clen = 0;
> +		zero_filled = true;
> +		goto got_pampd;
> +	}
> +	kunmap_atomic(user_mem);
>  
>  	if (!raw) {
>  		zcache_compress(page, &cdata, &clen);
> @@ -592,6 +614,8 @@ got_pampd:
>  		zcache_eph_zpages_max = zcache_eph_zpages;
>  	if (ramster_enabled && raw)
>  		ramster_count_foreign_pages(true, 1);
> +	if (zero_filled)
> +		pampd = (void *)ZERO_FILLED;
>  out:
>  	return pampd;
>  }
> @@ -601,14 +625,31 @@ static void *zcache_pampd_pers_create(char *data, size_t size, bool raw,
>  {
>  	void *pampd = NULL, *cdata = data;
>  	unsigned clen = size;
> +	bool zero_filled = false;
>  	struct page *page = (struct page *)(data), *newpage;
>  	unsigned long zbud_mean_zsize;
>  	unsigned long curr_pers_zpages, total_zsize;
> +	char *user_mem;
>  
>  	if (data == NULL) {
>  		BUG_ON(!ramster_enabled);
>  		goto create_pampd;
>  	}
> +
> +	user_mem = kmap_atomic(page);
> +
> +	/*
> +	 * Compressing zero-filled pages will waste memory and introduce
> +	 * serious fragmentation, skip it to avoid overhead
> +	 */
> +	if (page_is_zero_filled(user_mem)) {
> +		kunmap_atomic(user_mem);
> +		clen = 0;
> +		zero_filled = true;
> +		goto got_pampd;
> +	}
> +	kunmap_atomic(user_mem);
> +

Maybe we can add a function for this code? It seems a bit duplicated.

>  	curr_pers_zpages = zcache_pers_zpages;
>  /* FIXME CONFIG_RAMSTER... subtract atomic remote_pers_pages here? */
>  	if (!raw)
> @@ -674,6 +715,8 @@ got_pampd:
>  		zcache_pers_zbytes_max = zcache_pers_zbytes;
>  	if (ramster_enabled && raw)
>  		ramster_count_foreign_pages(false, 1);
> +	if (zero_filled)
> +		pampd = (void *)ZERO_FILLED;
>  out:
>  	return pampd;
>  }
> @@ -735,7 +778,8 @@ out:
>   */
>  void zcache_pampd_create_finish(void *pampd, bool eph)
>  {
> -	zbud_create_finish((struct zbudref *)pampd, eph);
> +	if (pampd != (void *)ZERO_FILLED)
> +		zbud_create_finish((struct zbudref *)pampd, eph);
>  }
>  
>  /*
> @@ -780,6 +824,14 @@ static int zcache_pampd_get_data(char *data, size_t *sizep, bool raw,
>  	BUG_ON(preemptible());
>  	BUG_ON(eph);	/* fix later if shared pools get implemented */
>  	BUG_ON(pampd_is_remote(pampd));
> +
> +	if (pampd == (void *)ZERO_FILLED) {
> +		handle_zero_filled_page(data);
> +		if (!raw)
> +			*sizep = PAGE_SIZE;
> +		return 0;
> +	}
> +
>  	if (raw)
>  		ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
>  						sizep, eph);
> @@ -801,12 +853,21 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
>  					struct tmem_oid *oid, uint32_t index)
>  {
>  	int ret;
> -	bool eph = !is_persistent(pool);
> +	bool eph = !is_persistent(pool), zero_filled = false;
>  	struct page *page = NULL;
>  	unsigned int zsize, zpages;
>  
>  	BUG_ON(preemptible());
>  	BUG_ON(pampd_is_remote(pampd));
> +
> +	if (pampd == (void *)ZERO_FILLED) {
> +		handle_zero_filled_page(data);
> +		zero_filled = true;
> +		if (!raw)
> +			*sizep = PAGE_SIZE;
> +		goto zero_fill;
> +	}
> +
>  	if (raw)
>  		ret = zbud_copy_from_zbud(data, (struct zbudref *)pampd,
>  						sizep, eph);
> @@ -818,6 +879,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
>  	}
>  	page = zbud_free_and_delist((struct zbudref *)pampd, eph,
>  					&zsize, &zpages);
> +zero_fill:
>  	if (eph) {
>  		if (page)
>  			zcache_eph_pageframes =
> @@ -837,7 +899,7 @@ static int zcache_pampd_get_data_and_free(char *data, size_t *sizep, bool raw,
>  	}
>  	if (!is_local_client(pool->client))
>  		ramster_count_foreign_pages(eph, -1);
> -	if (page)
> +	if (page && !zero_filled)
>  		zcache_free_page(page);
>  	return ret;
>  }
> @@ -851,16 +913,23 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
>  {
>  	struct page *page = NULL;
>  	unsigned int zsize, zpages;
> +	bool zero_filled = false;
>  
>  	BUG_ON(preemptible());
> -	if (pampd_is_remote(pampd)) {
> +
> +	if (pampd == (void *)ZERO_FILLED)
> +		zero_filled = true;
> +
> +	if (pampd_is_remote(pampd) && !zero_filled) {
> +
>  		BUG_ON(!ramster_enabled);
>  		pampd = ramster_pampd_free(pampd, pool, oid, index, acct);
>  		if (pampd == NULL)
>  			return;
>  	}
>  	if (is_ephemeral(pool)) {
> -		page = zbud_free_and_delist((struct zbudref *)pampd,
> +		if (!zero_filled)
> +			page = zbud_free_and_delist((struct zbudref *)pampd,
>  						true, &zsize, &zpages);
>  		if (page)
>  			zcache_eph_pageframes =
> @@ -883,7 +952,7 @@ static void zcache_pampd_free(void *pampd, struct tmem_pool *pool,
>  	}
>  	if (!is_local_client(pool->client))
>  		ramster_count_foreign_pages(is_ephemeral(pool), -1);
> -	if (page)
> +	if (page && !zero_filled)
>  		zcache_free_page(page);
>  }
>  
> 

-- 
Regards,
-Bob

* Re: [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently
  2013-03-15  2:34 [PATCH v3 0/5] zcache: Support zero-filled pages more efficiently Wanpeng Li
                   ` (4 preceding siblings ...)
  2013-03-15  2:34 ` [PATCH v3 5/5] clean TODO list Wanpeng Li
@ 2013-03-19  0:23 ` Greg Kroah-Hartman
  5 siblings, 0 replies; 10+ messages in thread
From: Greg Kroah-Hartman @ 2013-03-19  0:23 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Andrew Morton, Dan Magenheimer, Seth Jennings,
	Konrad Rzeszutek Wilk, Minchan Kim, linux-mm, linux-kernel

On Fri, Mar 15, 2013 at 10:34:15AM +0800, Wanpeng Li wrote:
> Changelog:
>  v2 -> v3:
>   * increment/decrement zcache_[eph|pers]_zpages for zero-filled pages, spotted by Dan 
>   * replace "zero" or "zero page" by "zero_filled_page", spotted by Dan
>  v1 -> v2:
>   * avoid changing tmem.[ch] entirely, spotted by Dan.
>   * don't accumulate [eph|pers]pageframe and [eph|pers]zpages for 
>     zero-filled pages, spotted by Dan
>   * cleanup TODO list
>   * add Dan Acked-by.

In the future, please make the subject: lines have "staging: zcache:" in
them, so I don't have to edit them by hand.

thanks,

greg k-h

* Re: [PATCH v3 2/5] zero-filled pages awareness
  2013-03-15  2:34 ` [PATCH v3 2/5] zero-filled pages awareness Wanpeng Li
  2013-03-16 14:12   ` Bob Liu
@ 2013-03-19  0:50   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 10+ messages in thread
From: Greg Kroah-Hartman @ 2013-03-19  0:50 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Andrew Morton, Dan Magenheimer, Seth Jennings,
	Konrad Rzeszutek Wilk, Minchan Kim, linux-mm, linux-kernel

On Fri, Mar 15, 2013 at 10:34:17AM +0800, Wanpeng Li wrote:
> Compressing zero-filled pages unnecessarily causes internal
> fragmentation and thus wastes memory. This special case can be
> optimized.
> 
> This patch captures zero-filled pages and marks their corresponding
> zcache backing page entry as zero-filled. Whenever such a zero-filled
> page is retrieved, we refill the page frame with zeros.
> 
> Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>

This patch applies with a bunch of fuzz, meaning it wasn't made against
the latest tree, which worries me.  Care to redo it, and the rest of the
series, and resend it?

thanks,

greg k-h
