* [PATCHv4 0/7] zswap: compressed swap caching
@ 2013-01-29 21:40 Seth Jennings
  2013-01-29 21:40 ` [PATCHv4 1/7] debugfs: add get/set for atomic types Seth Jennings
                   ` (9 more replies)
  0 siblings, 10 replies; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 21:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

Sorry for the churn, but this set should be easier to review.
The code required for flushing is now in a separate patch,
as requested.

Changelog:

v4:
* Added Acks (Minchan)
* Separated flushing functionality into standalone patch
  for easier review (Minchan)
* Fix comment on zswap enabled attribute (Minchan)
* Add TODO for dynamic mempool size (Minchan)
* Check for NULL in zswap_free_page() (Minchan)
* Add missing zs_free() in error path (Minchan)
* TODO: add comments for flushing/refcounting (Minchan)

NOTE: To build, read this:
http://lkml.org/lkml/2013/1/28/586

v3:
* Dropped the zsmalloc patches from the set, except the promotion patch
  which has been converted to a rename patch (vs a full diff).  The dropped
  patches have been Acked and are going into Greg's staging tree soon.
* Separated [PATCHv2 7/9] into two patches since it makes changes for two
  different reasons (Minchan)
* Moved ZSWAP_MAX_OUTSTANDING_FLUSHES near the top in zswap.c (Rik)
* Rebase to v3.8-rc5.  linux-next is a little volatile with the
  swapper_space per-type changes, which will affect this patchset.
* TODO: Move some stats from debugfs to sysfs. Which ones? (Rik)

v2:
* Rename zswap_fs_* functions to zswap_frontswap_* to avoid
  confusion with "filesystem"
* Add comment about what the tree lock protects
* Remove "#if 0" code (should have been done before)
* Break out changes to existing swap code into separate patch
* Fix blank line EOF warning on documentation file
* Rebase to next-20130107

Zswap Overview:

Zswap is a lightweight compressed cache for swap pages. It takes
pages that are in the process of being swapped out and attempts to
compress them into a dynamically allocated RAM-based memory pool.
If this process is successful, the writeback to the swap device is
deferred and, in many cases, avoided completely.  This results in
a significant I/O reduction and performance gains for systems that
are swapping.

The results of a kernel-build benchmark under heavy memory pressure
indicate a runtime reduction of 53% and an I/O reduction of 76% with
zswap vs normal swapping (see the Performance section for more).

Some additional performance metrics regarding the performance
improvements and I/O reductions that can be achieved using zswap, as
measured by SPECjbb, are provided here:

http://ibm.co/VCgHvM

These results include runs on x86 and new results on Power7+ with
hardware compression acceleration.

Of particular note is that zswap is able to evict pages from the compressed
cache, on an LRU basis, to the backing swap device when the compressed pool
reaches its size limit or the pool is unable to obtain additional pages
from the buddy allocator.  This eviction functionality had been identified
as a requirement in prior community discussions.

Patchset Structure:
1:   add atomic_t get/set to debugfs
2:   promote zsmalloc to /lib
3,4: changes to existing swap code for zswap
5,6: add zswap and documentation

Rationale:

Zswap provides compressed swap caching that basically trades CPU
cycles for reduced swap I/O.  This trade-off can result in a
significant performance improvement, as reads from and writes to the
compressed cache are almost always faster than reading from a swap
device, which incurs the latency of an asynchronous block I/O read.

Some potential benefits:
* Desktop/laptop users with limited RAM capacities can mitigate the
    performance impact of swapping.
* Overcommitted guests that share a common I/O resource can
    dramatically reduce their swap I/O pressure, avoiding heavy-handed
    I/O throttling by the hypervisor.  This allows more work to get
    done with less impact to the guest workload and guests sharing
    the I/O subsystem.
* Users with SSDs as swap devices can extend the life of the device by
    drastically reducing life-shortening writes.

Compressed swap is also provided in zcache, along with page cache
compression and RAM clustering through RAMster.  Zswap seeks to
deliver the benefit of swap compression to users in a discrete
function.  This design decision is akin to the Unix design philosophy
of doing one thing well: it leaves file cache compression and other
features for separate code.

Design:

Zswap receives pages for compression through the Frontswap API and
is able to evict pages from its own compressed pool on an LRU basis
and write them back to the backing swap device in the case that the
compressed pool is full or unable to secure additional pages from
the buddy allocator.

Zswap makes use of zsmalloc for managing the compressed memory
pool.  This is because zsmalloc is specifically designed to minimize
fragmentation on large (> PAGE_SIZE/2) allocation sizes.  Each
allocation in zsmalloc is not directly accessible by address.
Rather, a handle is returned by the allocation routine and that handle
must be mapped before being accessed.  The compressed memory pool grows
on demand and shrinks as compressed pages are freed.  The pool is
not preallocated.
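
For illustration, the handle-based interface looks roughly like this
(a sketch based on the calls zswap makes in patch 3/7; error handling
omitted and the local variable names are hypothetical):

	unsigned long handle;
	char *buf;

	/* the allocator returns an opaque handle, not a pointer */
	handle = zs_malloc(pool, compressed_len, GFP_NOWAIT);

	/* the handle must be mapped before the memory can be accessed */
	buf = zs_map_object(pool, handle, ZS_MM_WO);
	memcpy(buf, compressed_data, compressed_len);
	zs_unmap_object(pool, handle);

	...

	zs_free(pool, handle);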

When a swap page is passed from frontswap to zswap, zswap maintains
a mapping of the swap entry, a combination of the swap type and swap
offset, to the zsmalloc handle that references that compressed swap
page.  This mapping is achieved with a red-black tree per swap type.
The swap offset is the search key for the tree nodes.
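
In code, resolving a stored page follows this shape (a minimal
sketch; the real zswap_rb_search() and tree locking appear in
patch 3/7):

	swp_entry_t swpentry = swp_entry(type, offset);
	struct zswap_tree *tree = zswap_trees[swp_type(swpentry)];
	struct zswap_entry *entry;

	spin_lock(&tree->lock);
	entry = zswap_rb_search(&tree->rbroot, swp_offset(swpentry));
	spin_unlock(&tree->lock);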

Zswap seeks to be simple in its policies.  Sysfs attributes allow for
two user-controlled policies:
* max_compression_ratio - Maximum compression ratio, as a percentage,
    for an acceptable compressed page. Any page that does not compress
    by at least this ratio will be rejected.
* max_pool_percent - The maximum percentage of memory that the compressed
    pool can occupy.
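
For example, with the default max_compression_ratio of 80, a
4096-byte page must compress to roughly 80% of its size (about 3.2KB);
otherwise the store is rejected and the page falls through to the
normal swap path.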

To enable zswap, the "enabled" attribute must be set to 1 at boot time.

Zswap allows the compressor to be selected at kernel boot time by
setting the "compressor" attribute.  The default compressor is lzo.
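
Since zswap is built into the kernel, these attributes are module
parameters that can be given on the kernel command line, e.g.
"zswap.enabled=1 zswap.compressor=lzo"; the two policy attributes
above are also writable at runtime under
/sys/module/zswap/parameters/.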

A debugfs interface is provided for various statistics about pool size,
number of pages stored, and various counters for the reasons pages
are rejected.
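With debugfs mounted, these appear under /sys/kernel/debug/zswap/
(e.g. stored_pages, pool_pages, and the reject_* counters).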

Performance, Kernel Building:

Setup
========
Gentoo w/ kernel v3.7-rc7
Quad-core i5-2500 @ 3.3GHz
512MB DDR3 1600MHz (limited with mem=512m on boot)
Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
majflt are major page faults reported by the time command
pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
the make -jN

Summary
========
* Zswap reduces I/O and improves performance at all swap pressure levels.

* Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
  over 1.5GB of I/O, and cut runtime in half.

Details
========
I/O (in pages)
	base				zswap				change	change
N	pswpin	pswpout	majflt	I/O sum	pswpin	pswpout	majflt	I/O sum	%I/O	MB
8	1	335	291	627	0	0	249	249	-60%	1
12	3688	14315	5290	23293	123	860	5954	6937	-70%	64
16	12711	46179	16803	75693	2936	7390	46092	56418	-25%	75
20	42178	133781	49898	225857	9460	28382	92951	130793	-42%	371
24	96079	357280	105242	558601	7719	18484	109309	135512	-76%	1653

Runtime (in seconds)
N	base	zswap	%change
8	107	107	0%
12	128	110	-14%
16	191	179	-6%
20	371	240	-35%
24	570	267	-53%

%CPU utilization (out of 400% on 4 cpus)
N	base	zswap	%change
8	317	319	1%
12	267	311	16%
16	179	191	7%
20	94	143	52%
24	60	128	113%


Seth Jennings (7):
  debugfs: add get/set for atomic types
  zsmalloc: promote to lib/
  zswap: add to mm/
  mm: break up swap_writepage() for frontswap backends
  mm: allow for outstanding swap writeback accounting
  zswap: add flushing support
  zswap: add documentation

 Documentation/vm/zswap.txt                         |   73 ++
 drivers/staging/Kconfig                            |    2 -
 drivers/staging/Makefile                           |    1 -
 drivers/staging/zcache/zcache-main.c               |    3 +-
 drivers/staging/zram/zram_drv.h                    |    3 +-
 drivers/staging/zsmalloc/Kconfig                   |   10 -
 drivers/staging/zsmalloc/Makefile                  |    3 -
 fs/debugfs/file.c                                  |   42 +
 include/linux/debugfs.h                            |    2 +
 include/linux/swap.h                               |    4 +
 .../staging/zsmalloc => include/linux}/zsmalloc.h  |    0
 lib/Kconfig                                        |   18 +
 lib/Makefile                                       |    1 +
 .../zsmalloc/zsmalloc-main.c => lib/zsmalloc.c     |    3 +-
 mm/Kconfig                                         |   15 +
 mm/Makefile                                        |    1 +
 mm/page_io.c                                       |   22 +-
 mm/swap_state.c                                    |    2 +-
 mm/zswap.c                                         | 1073 ++++++++++++++++++++
 19 files changed, 1250 insertions(+), 28 deletions(-)
 create mode 100644 Documentation/vm/zswap.txt
 delete mode 100644 drivers/staging/zsmalloc/Kconfig
 delete mode 100644 drivers/staging/zsmalloc/Makefile
 rename {drivers/staging/zsmalloc => include/linux}/zsmalloc.h (100%)
 rename drivers/staging/zsmalloc/zsmalloc-main.c => lib/zsmalloc.c (99%)
 create mode 100644 mm/zswap.c

-- 
1.8.1.1



* [PATCHv4 1/7] debugfs: add get/set for atomic types
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
@ 2013-01-29 21:40 ` Seth Jennings
  2013-01-29 21:40 ` [PATCHv4 2/7] zsmalloc: promote to lib/ Seth Jennings
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 21:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

debugfs currently lacks the ability to create attributes
that set/get atomic_t values.

This patch adds support for this through a new
debugfs_create_atomic_t() function.
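
Usage mirrors the other debugfs helpers; a minimal sketch with
hypothetical names:

	static atomic_t my_stat = ATOMIC_INIT(0);

	debugfs_create_atomic_t("my_stat", S_IRUGO, parent_dentry,
				&my_stat);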

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 fs/debugfs/file.c       | 42 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/debugfs.h |  2 ++
 2 files changed, 44 insertions(+)

diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
index c5ca6ae..fa26d5b 100644
--- a/fs/debugfs/file.c
+++ b/fs/debugfs/file.c
@@ -21,6 +21,7 @@
 #include <linux/debugfs.h>
 #include <linux/io.h>
 #include <linux/slab.h>
+#include <linux/atomic.h>
 
 static ssize_t default_read_file(struct file *file, char __user *buf,
 				 size_t count, loff_t *ppos)
@@ -403,6 +404,47 @@ struct dentry *debugfs_create_size_t(const char *name, umode_t mode,
 }
 EXPORT_SYMBOL_GPL(debugfs_create_size_t);
 
+static int debugfs_atomic_t_set(void *data, u64 val)
+{
+	atomic_set((atomic_t *)data, val);
+	return 0;
+}
+static int debugfs_atomic_t_get(void *data, u64 *val)
+{
+	*val = atomic_read((atomic_t *)data);
+	return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t, debugfs_atomic_t_get,
+			debugfs_atomic_t_set, "%llu\n");
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t_ro, debugfs_atomic_t_get, NULL, "%llu\n");
+DEFINE_SIMPLE_ATTRIBUTE(fops_atomic_t_wo, NULL, debugfs_atomic_t_set, "%llu\n");
+
+/**
+ * debugfs_create_atomic_t - create a debugfs file that is used to read and
+ * write an atomic_t value
+ * @name: a pointer to a string containing the name of the file to create.
+ * @mode: the permission that the file should have
+ * @parent: a pointer to the parent dentry for this file.  This should be a
+ *          directory dentry if set.  If this parameter is %NULL, then the
+ *          file will be created in the root of the debugfs filesystem.
+ * @value: a pointer to the variable that the file should read to and write
+ *         from.
+ */
+struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
+				 struct dentry *parent, atomic_t *value)
+{
+	/* if there are no write bits set, make read only */
+	if (!(mode & S_IWUGO))
+		return debugfs_create_file(name, mode, parent, value,
+					&fops_atomic_t_ro);
+	/* if there are no read bits set, make write only */
+	if (!(mode & S_IRUGO))
+		return debugfs_create_file(name, mode, parent, value,
+					&fops_atomic_t_wo);
+
+	return debugfs_create_file(name, mode, parent, value, &fops_atomic_t);
+}
+EXPORT_SYMBOL_GPL(debugfs_create_atomic_t);
 
 static ssize_t read_file_bool(struct file *file, char __user *user_buf,
 			      size_t count, loff_t *ppos)
diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
index 66c434f..51fea70 100644
--- a/include/linux/debugfs.h
+++ b/include/linux/debugfs.h
@@ -79,6 +79,8 @@ struct dentry *debugfs_create_x64(const char *name, umode_t mode,
 				  struct dentry *parent, u64 *value);
 struct dentry *debugfs_create_size_t(const char *name, umode_t mode,
 				     struct dentry *parent, size_t *value);
+struct dentry *debugfs_create_atomic_t(const char *name, umode_t mode,
+				     struct dentry *parent, atomic_t *value);
 struct dentry *debugfs_create_bool(const char *name, umode_t mode,
 				  struct dentry *parent, u32 *value);
 
-- 
1.8.1.1



* [PATCHv4 2/7] zsmalloc: promote to lib/
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
  2013-01-29 21:40 ` [PATCHv4 1/7] debugfs: add get/set for atomic types Seth Jennings
@ 2013-01-29 21:40 ` Seth Jennings
  2013-01-29 22:51   ` Andrew Morton
  2013-01-29 21:40 ` [PATCHv4 3/7] zswap: add to mm/ Seth Jennings
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 21:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

This patch promotes the slab-based zsmalloc memory allocator
from the staging tree to lib/.

zswap depends on this allocator for storing compressed RAM pages
in an efficient way under system-wide memory pressure, where
high-order (greater than 0) page allocations are very likely to
fail.

For more information on zsmalloc and its internals, read the
documentation at the top of the zsmalloc.c file.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
--
This patch is similar to a patch Minchan has out on the list
that promotes zsmalloc for use in zram.
---
 drivers/staging/Kconfig                                |  2 --
 drivers/staging/Makefile                               |  1 -
 drivers/staging/zcache/zcache-main.c                   |  3 +--
 drivers/staging/zram/zram_drv.h                        |  3 +--
 drivers/staging/zsmalloc/Kconfig                       | 10 ----------
 drivers/staging/zsmalloc/Makefile                      |  3 ---
 {drivers/staging/zsmalloc => include/linux}/zsmalloc.h |  0
 lib/Kconfig                                            | 18 ++++++++++++++++++
 lib/Makefile                                           |  1 +
 .../staging/zsmalloc/zsmalloc-main.c => lib/zsmalloc.c |  3 +--
 10 files changed, 22 insertions(+), 22 deletions(-)
 delete mode 100644 drivers/staging/zsmalloc/Kconfig
 delete mode 100644 drivers/staging/zsmalloc/Makefile
 rename {drivers/staging/zsmalloc => include/linux}/zsmalloc.h (100%)
 rename drivers/staging/zsmalloc/zsmalloc-main.c => lib/zsmalloc.c (99%)

diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 329bdb4..c0a7918 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -76,8 +76,6 @@ source "drivers/staging/zram/Kconfig"
 
 source "drivers/staging/zcache/Kconfig"
 
-source "drivers/staging/zsmalloc/Kconfig"
-
 source "drivers/staging/wlags49_h2/Kconfig"
 
 source "drivers/staging/wlags49_h25/Kconfig"
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index c7ec486..1572fe5 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -32,7 +32,6 @@ obj-$(CONFIG_DX_SEP)            += sep/
 obj-$(CONFIG_IIO)		+= iio/
 obj-$(CONFIG_ZRAM)		+= zram/
 obj-$(CONFIG_ZCACHE)		+= zcache/
-obj-$(CONFIG_ZSMALLOC)		+= zsmalloc/
 obj-$(CONFIG_WLAGS49_H2)	+= wlags49_h2/
 obj-$(CONFIG_WLAGS49_H25)	+= wlags49_h25/
 obj-$(CONFIG_FB_SM7XX)		+= sm7xxfb/
diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index 52b43b7..75c08c5 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -32,10 +32,9 @@
 #include <linux/crypto.h>
 #include <linux/string.h>
 #include <linux/idr.h>
+#include <linux/zsmalloc.h>
 #include "tmem.h"
 
-#include "../zsmalloc/zsmalloc.h"
-
 #ifdef CONFIG_CLEANCACHE
 #include <linux/cleancache.h>
 #endif
diff --git a/drivers/staging/zram/zram_drv.h b/drivers/staging/zram/zram_drv.h
index df2eec4..1e72965 100644
--- a/drivers/staging/zram/zram_drv.h
+++ b/drivers/staging/zram/zram_drv.h
@@ -17,8 +17,7 @@
 
 #include <linux/spinlock.h>
 #include <linux/mutex.h>
-
-#include "../zsmalloc/zsmalloc.h"
+#include <linux/zsmalloc.h>
 
 /*
  * Some arbitrary value. This is just to catch
diff --git a/drivers/staging/zsmalloc/Kconfig b/drivers/staging/zsmalloc/Kconfig
deleted file mode 100644
index 9084565..0000000
--- a/drivers/staging/zsmalloc/Kconfig
+++ /dev/null
@@ -1,10 +0,0 @@
-config ZSMALLOC
-	tristate "Memory allocator for compressed pages"
-	default n
-	help
-	  zsmalloc is a slab-based memory allocator designed to store
-	  compressed RAM pages.  zsmalloc uses virtual memory mapping
-	  in order to reduce fragmentation.  However, this results in a
-	  non-standard allocator interface where a handle, not a pointer, is
-	  returned by an alloc().  This handle must be mapped in order to
-	  access the allocated space.
diff --git a/drivers/staging/zsmalloc/Makefile b/drivers/staging/zsmalloc/Makefile
deleted file mode 100644
index b134848..0000000
--- a/drivers/staging/zsmalloc/Makefile
+++ /dev/null
@@ -1,3 +0,0 @@
-zsmalloc-y 		:= zsmalloc-main.o
-
-obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
diff --git a/drivers/staging/zsmalloc/zsmalloc.h b/include/linux/zsmalloc.h
similarity index 100%
rename from drivers/staging/zsmalloc/zsmalloc.h
rename to include/linux/zsmalloc.h
diff --git a/lib/Kconfig b/lib/Kconfig
index 75cdb77..fdab273 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -219,6 +219,24 @@ config DECOMPRESS_LZO
 config GENERIC_ALLOCATOR
 	boolean
 
+config ZSMALLOC
+	tristate "Memory allocator for compressed pages"
+	default n
+	help
+	  zsmalloc is a slab-based memory allocator designed to store
+	  compressed RAM pages.  zsmalloc uses a memory pool that combines
+	  single pages into higher order pages by linking them together
+	  using the fields of the struct page. Allocations are then
+	  mapped through copy buffers or VM mapping, in order to reduce
+	  memory pool fragmentation and increase allocation success rate under
+	  memory pressure.
+
+	  This results in a non-standard allocator interface where
+	  a handle, not a pointer, is returned by the allocation function.
+	  This handle must be mapped in order to access the allocated space.
+
+	  If unsure, say N.
+
 #
 # reed solomon support is select'ed if needed
 #
diff --git a/lib/Makefile b/lib/Makefile
index 02ed6c0..70b0892 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -65,6 +65,7 @@ obj-$(CONFIG_CRC7)	+= crc7.o
 obj-$(CONFIG_LIBCRC32C)	+= libcrc32c.o
 obj-$(CONFIG_CRC8)	+= crc8.o
 obj-$(CONFIG_GENERIC_ALLOCATOR) += genalloc.o
+obj-$(CONFIG_ZSMALLOC) += zsmalloc.o
 
 obj-$(CONFIG_ZLIB_INFLATE) += zlib_inflate/
 obj-$(CONFIG_ZLIB_DEFLATE) += zlib_deflate/
diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c b/lib/zsmalloc.c
similarity index 99%
rename from drivers/staging/zsmalloc/zsmalloc-main.c
rename to lib/zsmalloc.c
index 13018b7..d5146c7 100644
--- a/drivers/staging/zsmalloc/zsmalloc-main.c
+++ b/lib/zsmalloc.c
@@ -78,8 +78,7 @@
 #include <linux/hardirq.h>
 #include <linux/spinlock.h>
 #include <linux/types.h>
-
-#include "zsmalloc.h"
+#include <linux/zsmalloc.h>
 
 /*
  * This must be power of 2 and greater than of equal to sizeof(link_free).
-- 
1.8.1.1



* [PATCHv4 3/7] zswap: add to mm/
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
  2013-01-29 21:40 ` [PATCHv4 1/7] debugfs: add get/set for atomic types Seth Jennings
  2013-01-29 21:40 ` [PATCHv4 2/7] zsmalloc: promote to lib/ Seth Jennings
@ 2013-01-29 21:40 ` Seth Jennings
  2013-01-31  7:07   ` Minchan Kim
  2013-01-29 21:40 ` [PATCHv4 4/7] mm: break up swap_writepage() for frontswap backends Seth Jennings
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 21:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

zswap is a thin compression backend for frontswap. It receives
pages from frontswap and attempts to store them in a compressed
memory pool, resulting in an effective partial memory reclaim and
dramatically reduced swap device I/O.

Additionally, in most cases, pages can be retrieved from this
compressed store much more quickly than from traditional swap
devices, resulting in faster performance for many workloads.

This patch adds the zswap driver to mm/.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
---
 mm/Kconfig  |  15 ++
 mm/Makefile |   1 +
 mm/zswap.c  | 656 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 672 insertions(+)
 create mode 100644 mm/zswap.c

diff --git a/mm/Kconfig b/mm/Kconfig
index 278e3ab..14b9acb 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -446,3 +446,18 @@ config FRONTSWAP
 	  and swap data is stored as normal on the matching swap device.
 
 	  If unsure, say Y to enable frontswap.
+
+config ZSWAP
+	bool "In-kernel swap page compression"
+	depends on FRONTSWAP && CRYPTO
+	select CRYPTO_LZO
+	select ZSMALLOC
+	default n
+	help
+	  Zswap is a backend for the frontswap mechanism in the VMM.
+	  It receives pages from frontswap and attempts to store them
+	  in a compressed memory pool, resulting in an effective
+	  partial memory reclaim.  In addition, pages can be retrieved
+	  from this compressed store much faster than most traditional
+	  swap devices, resulting in reduced I/O and faster performance
+	  for many workloads.
diff --git a/mm/Makefile b/mm/Makefile
index 3a46287..1b1ed5c 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
 obj-$(CONFIG_BOUNCE)	+= bounce.o
 obj-$(CONFIG_SWAP)	+= page_io.o swap_state.o swapfile.o
 obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
+obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
 obj-$(CONFIG_NUMA) 	+= mempolicy.o
diff --git a/mm/zswap.c b/mm/zswap.c
new file mode 100644
index 0000000..a6c2928
--- /dev/null
+++ b/mm/zswap.c
@@ -0,0 +1,656 @@
+/*
+ * zswap.c - zswap driver file
+ *
+ * zswap is a backend for frontswap that takes pages that are in the
+ * process of being swapped out and attempts to compress them and store
+ * them in a RAM-based memory pool.  This results in a significant I/O
+ * reduction on the real swap device and, in the case of a slow swap
+ * device, can also improve workload performance.
+ *
+ * Copyright (C) 2012  Seth Jennings <sjenning@linux.vnet.ibm.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+*/
+
+#include <linux/module.h>
+#include <linux/cpu.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/frontswap.h>
+#include <linux/rbtree.h>
+#include <linux/swap.h>
+#include <linux/crypto.h>
+#include <linux/mempool.h>
+#include <linux/zsmalloc.h>
+
+/*********************************
+* statistics
+**********************************/
+/* Number of memory pages used by the compressed pool */
+static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
+/* The number of compressed pages currently stored in zswap */
+static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
+
+/*
+ * The statistics below are not protected from concurrent access for
+ * performance reasons so they may not be 100% accurate.  However,
+ * they do provide useful information on roughly how many times a
+ * certain event is occurring.
+*/
+static u64 zswap_pool_limit_hit;
+static u64 zswap_reject_compress_poor;
+static u64 zswap_reject_zsmalloc_fail;
+static u64 zswap_reject_kmemcache_fail;
+static u64 zswap_duplicate_entry;
+
+/*********************************
+* tunables
+**********************************/
+/* Enable/disable zswap (disabled by default, fixed at boot for now) */
+static bool zswap_enabled;
+module_param_named(enabled, zswap_enabled, bool, 0);
+
+/* Compressor to be used by zswap (fixed at boot for now) */
+#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
+static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
+module_param_named(compressor, zswap_compressor, charp, 0);
+
+/* The maximum percentage of memory that the compressed pool can occupy */
+static unsigned int zswap_max_pool_percent = 20;
+module_param_named(max_pool_percent,
+			zswap_max_pool_percent, uint, 0644);
+
+/*
+ * Maximum compression ratio, as a percentage, for an acceptable
+ * compressed page. Any pages that do not compress by at least
+ * this ratio will be rejected.
+*/
+static unsigned int zswap_max_compression_ratio = 80;
+module_param_named(max_compression_ratio,
+			zswap_max_compression_ratio, uint, 0644);
+
+/*********************************
+* compression functions
+**********************************/
+/* per-cpu compression transforms */
+static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms;
+
+enum comp_op {
+	ZSWAP_COMPOP_COMPRESS,
+	ZSWAP_COMPOP_DECOMPRESS
+};
+
+static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
+				u8 *dst, unsigned int *dlen)
+{
+	struct crypto_comp *tfm;
+	int ret;
+
+	tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());
+	switch (op) {
+	case ZSWAP_COMPOP_COMPRESS:
+		ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
+		break;
+	case ZSWAP_COMPOP_DECOMPRESS:
+		ret = crypto_comp_decompress(tfm, src, slen, dst, dlen);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	put_cpu();
+	return ret;
+}
+
+static int __init zswap_comp_init(void)
+{
+	if (!crypto_has_comp(zswap_compressor, 0, 0)) {
+		pr_info("zswap: %s compressor not available\n",
+			zswap_compressor);
+		/* fall back to default compressor */
+		zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
+		if (!crypto_has_comp(zswap_compressor, 0, 0))
+			/* can't even load the default compressor */
+			return -ENODEV;
+	}
+	pr_info("zswap: using %s compressor\n", zswap_compressor);
+
+	/* alloc percpu transforms */
+	zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
+	if (!zswap_comp_pcpu_tfms)
+		return -ENOMEM;
+	return 0;
+}
+
+static void zswap_comp_exit(void)
+{
+	/* free percpu transforms */
+	if (zswap_comp_pcpu_tfms)
+		free_percpu(zswap_comp_pcpu_tfms);
+}
+
+/*********************************
+* data structures
+**********************************/
+struct zswap_entry {
+	struct rb_node rbnode;
+	unsigned type;
+	pgoff_t offset;
+	unsigned long handle;
+	unsigned int length;
+};
+
+struct zswap_tree {
+	struct rb_root rbroot;
+	spinlock_t lock;
+	struct zs_pool *pool;
+};
+
+static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
+
+/*********************************
+* zswap entry functions
+**********************************/
+#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache"
+static struct kmem_cache *zswap_entry_cache;
+
+static inline int zswap_entry_cache_create(void)
+{
+	zswap_entry_cache =
+		kmem_cache_create(ZSWAP_KMEM_CACHE_NAME,
+			sizeof(struct zswap_entry), 0, 0, NULL);
+	return (zswap_entry_cache == NULL);
+}
+
+static inline void zswap_entry_cache_destroy(void)
+{
+	kmem_cache_destroy(zswap_entry_cache);
+}
+
+static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
+{
+	struct zswap_entry *entry;
+	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
+	if (!entry)
+		return NULL;
+	return entry;
+}
+
+static inline void zswap_entry_cache_free(struct zswap_entry *entry)
+{
+	kmem_cache_free(zswap_entry_cache, entry);
+}
+
+/*********************************
+* rbtree functions
+**********************************/
+static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
+{
+	struct rb_node *node = root->rb_node;
+	struct zswap_entry *entry;
+
+	while (node) {
+		entry = rb_entry(node, struct zswap_entry, rbnode);
+		if (entry->offset > offset)
+			node = node->rb_left;
+		else if (entry->offset < offset)
+			node = node->rb_right;
+		else
+			return entry;
+	}
+	return NULL;
+}
+
+/*
+ * In the case that an entry with the same offset is found, a pointer to
+ * the existing entry is stored in dupentry and the function returns -EEXIST
+*/
+static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
+			struct zswap_entry **dupentry)
+{
+	struct rb_node **link = &root->rb_node, *parent = NULL;
+	struct zswap_entry *myentry;
+
+	while (*link) {
+		parent = *link;
+		myentry = rb_entry(parent, struct zswap_entry, rbnode);
+		if (myentry->offset > entry->offset)
+			link = &(*link)->rb_left;
+		else if (myentry->offset < entry->offset)
+			link = &(*link)->rb_right;
+		else {
+			*dupentry = myentry;
+			return -EEXIST;
+		}
+	}
+	rb_link_node(&entry->rbnode, parent, link);
+	rb_insert_color(&entry->rbnode, root);
+	return 0;
+}
+
+/*********************************
+* per-cpu code
+**********************************/
+static DEFINE_PER_CPU(u8 *, zswap_dstmem);
+
+static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
+{
+	struct crypto_comp *tfm;
+	u8 *dst;
+
+	switch (action) {
+	case CPU_UP_PREPARE:
+		tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
+		if (IS_ERR(tfm)) {
+			pr_err("zswap: can't allocate compressor transform\n");
+			return NOTIFY_BAD;
+		}
+		*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
+		dst = (u8 *)__get_free_pages(GFP_KERNEL, 1);
+		if (!dst) {
+			pr_err("zswap: can't allocate compressor buffer\n");
+			crypto_free_comp(tfm);
+			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
+			return NOTIFY_BAD;
+		}
+		per_cpu(zswap_dstmem, cpu) = dst;
+		break;
+	case CPU_DEAD:
+	case CPU_UP_CANCELED:
+		tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu);
+		if (tfm) {
+			crypto_free_comp(tfm);
+			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
+		}
+		dst = per_cpu(zswap_dstmem, cpu);
+		if (dst) {
+			free_pages((unsigned long)dst, 1);
+			per_cpu(zswap_dstmem, cpu) = NULL;
+		}
+		break;
+	default:
+		break;
+	}
+	return NOTIFY_OK;
+}
+
+static int zswap_cpu_notifier(struct notifier_block *nb,
+				unsigned long action, void *pcpu)
+{
+	unsigned long cpu = (unsigned long)pcpu;
+	return __zswap_cpu_notifier(action, cpu);
+}
+
+static struct notifier_block zswap_cpu_notifier_block = {
+	.notifier_call = zswap_cpu_notifier
+};
+
+static int zswap_cpu_init(void)
+{
+	unsigned long cpu;
+
+	get_online_cpus();
+	for_each_online_cpu(cpu)
+		if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK)
+			goto cleanup;
+	register_cpu_notifier(&zswap_cpu_notifier_block);
+	put_online_cpus();
+	return 0;
+
+cleanup:
+	for_each_online_cpu(cpu)
+		__zswap_cpu_notifier(CPU_UP_CANCELED, cpu);
+	put_online_cpus();
+	return -ENOMEM;
+}
+
+/*********************************
+* zsmalloc callbacks
+**********************************/
+static mempool_t *zswap_page_pool;
+
+static inline unsigned int zswap_max_pool_pages(void)
+{
+	return zswap_max_pool_percent * totalram_pages / 100;
+}
+
+static inline int zswap_page_pool_create(void)
+{
+	/* TODO: dynamically size mempool */
+	zswap_page_pool = mempool_create_page_pool(256, 0);
+	if (!zswap_page_pool)
+		return -ENOMEM;
+	return 0;
+}
+
+static inline void zswap_page_pool_destroy(void)
+{
+	mempool_destroy(zswap_page_pool);
+}
+
+static struct page *zswap_alloc_page(gfp_t flags)
+{
+	struct page *page;
+
+	if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) {
+		zswap_pool_limit_hit++;
+		return NULL;
+	}
+	page = mempool_alloc(zswap_page_pool, flags);
+	if (page)
+		atomic_inc(&zswap_pool_pages);
+	return page;
+}
+
+static void zswap_free_page(struct page *page)
+{
+	if (!page)
+		return;
+	mempool_free(page, zswap_page_pool);
+	atomic_dec(&zswap_pool_pages);
+}
+
+static struct zs_ops zswap_zs_ops = {
+	.alloc = zswap_alloc_page,
+	.free = zswap_free_page
+};
+
+/*********************************
+* frontswap hooks
+**********************************/
+/* attempts to compress and store a single page */
+static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *page)
+{
+	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_entry *entry, *dupentry;
+	int ret;
+	unsigned int dlen = PAGE_SIZE;
+	unsigned long handle;
+	char *buf;
+	u8 *src, *dst;
+
+	if (!tree) {
+		ret = -ENODEV;
+		goto reject;
+	}
+
+	/* compress */
+	dst = get_cpu_var(zswap_dstmem);
+	src = kmap_atomic(page);
+	ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
+	kunmap_atomic(src);
+	if (ret) {
+		ret = -EINVAL;
+		goto putcpu;
+	}
+	if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
+		zswap_reject_compress_poor++;
+		ret = -E2BIG;
+		goto putcpu;
+	}
+
+	/* store */
+	handle = zs_malloc(tree->pool, dlen,
+		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
+			__GFP_NOWARN);
+	if (!handle) {
+		zswap_reject_zsmalloc_fail++;
+		ret = -ENOMEM;
+		goto putcpu;
+	}
+
+	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
+	memcpy(buf, dst, dlen);
+	zs_unmap_object(tree->pool, handle);
+	put_cpu_var(zswap_dstmem);
+
+	/* allocate entry */
+	entry = zswap_entry_cache_alloc(GFP_KERNEL);
+	if (!entry) {
+		zs_free(tree->pool, handle);
+		zswap_reject_kmemcache_fail++;
+		ret = -ENOMEM;
+		goto reject;
+	}
+
+	/* populate entry */
+	entry->type = type;
+	entry->offset = offset;
+	entry->handle = handle;
+	entry->length = dlen;
+
+	/* map */
+	spin_lock(&tree->lock);
+	do {
+		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
+		if (ret == -EEXIST) {
+			zswap_duplicate_entry++;
+
+			/* remove from rbtree */
+			rb_erase(&dupentry->rbnode, &tree->rbroot);
+
+			/* free */
+			zs_free(tree->pool, dupentry->handle);
+			zswap_entry_cache_free(dupentry);
+			atomic_dec(&zswap_stored_pages);
+		}
+	} while (ret == -EEXIST);
+	spin_unlock(&tree->lock);
+
+	/* update stats */
+	atomic_inc(&zswap_stored_pages);
+
+	return 0;
+
+putcpu:
+	put_cpu_var(zswap_dstmem);
+reject:
+	return ret;
+}
+
+/*
+ * Returns 0 if the page was successfully decompressed
+ * Returns -1 on entry not found or error
+*/
+static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page)
+{
+	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_entry *entry;
+	u8 *src, *dst;
+	unsigned int dlen;
+
+	/* find */
+	spin_lock(&tree->lock);
+	entry = zswap_rb_search(&tree->rbroot, offset);
+	spin_unlock(&tree->lock);
+
+	/* decompress */
+	dlen = PAGE_SIZE;
+	src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
+	dst = kmap_atomic(page);
+	zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
+		dst, &dlen);
+	kunmap_atomic(dst);
+	zs_unmap_object(tree->pool, entry->handle);
+
+	return 0;
+}
+
+/* invalidates a single page */
+static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
+{
+	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_entry *entry;
+
+	/* find */
+	spin_lock(&tree->lock);
+	entry = zswap_rb_search(&tree->rbroot, offset);
+
+	/* remove from rbtree */
+	rb_erase(&entry->rbnode, &tree->rbroot);
+	spin_unlock(&tree->lock);
+
+	/* free */
+	zs_free(tree->pool, entry->handle);
+	zswap_entry_cache_free(entry);
+	atomic_dec(&zswap_stored_pages);
+}
+
+/* invalidates all pages for the given swap type */
+static void zswap_frontswap_invalidate_area(unsigned type)
+{
+	struct zswap_tree *tree = zswap_trees[type];
+	struct rb_node *node, *next;
+	struct zswap_entry *entry;
+
+	if (!tree)
+		return;
+
+	/* walk the tree and free everything */
+	spin_lock(&tree->lock);
+	node = rb_first(&tree->rbroot);
+	while (node) {
+		entry = rb_entry(node, struct zswap_entry, rbnode);
+		zs_free(tree->pool, entry->handle);
+		next = rb_next(node);
+		zswap_entry_cache_free(entry);
+		node = next;
+	}
+	tree->rbroot = RB_ROOT;
+	spin_unlock(&tree->lock);
+}
+
+/* NOTE: this is called in atomic context from swapon and must not sleep */
+static void zswap_frontswap_init(unsigned type)
+{
+	struct zswap_tree *tree;
+
+	tree = kzalloc(sizeof(struct zswap_tree), GFP_NOWAIT);
+	if (!tree)
+		goto err;
+	tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops);
+	if (!tree->pool)
+		goto freetree;
+	tree->rbroot = RB_ROOT;
+	spin_lock_init(&tree->lock);
+	zswap_trees[type] = tree;
+	return;
+
+freetree:
+	kfree(tree);
+err:
+	pr_err("zswap: alloc failed, zswap disabled for swap type %d\n", type);
+}
+
+static struct frontswap_ops zswap_frontswap_ops = {
+	.store = zswap_frontswap_store,
+	.load = zswap_frontswap_load,
+	.invalidate_page = zswap_frontswap_invalidate_page,
+	.invalidate_area = zswap_frontswap_invalidate_area,
+	.init = zswap_frontswap_init
+};
+
+/*********************************
+* debugfs functions
+**********************************/
+#ifdef CONFIG_DEBUG_FS
+#include <linux/debugfs.h>
+
+static struct dentry *zswap_debugfs_root;
+
+static int __init zswap_debugfs_init(void)
+{
+	if (!debugfs_initialized())
+		return -ENODEV;
+
+	zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
+	if (!zswap_debugfs_root)
+		return -ENOMEM;
+
+	debugfs_create_u64("pool_limit_hit", S_IRUGO,
+			zswap_debugfs_root, &zswap_pool_limit_hit);
+	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
+	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
+	debugfs_create_u64("reject_compress_poor", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_compress_poor);
+	debugfs_create_u64("duplicate_entry", S_IRUGO,
+			zswap_debugfs_root, &zswap_duplicate_entry);
+	debugfs_create_atomic_t("pool_pages", S_IRUGO,
+			zswap_debugfs_root, &zswap_pool_pages);
+	debugfs_create_atomic_t("stored_pages", S_IRUGO,
+			zswap_debugfs_root, &zswap_stored_pages);
+
+	return 0;
+}
+
+static void __exit zswap_debugfs_exit(void)
+{
+	if (zswap_debugfs_root)
+		debugfs_remove_recursive(zswap_debugfs_root);
+}
+#else
+static inline int __init zswap_debugfs_init(void)
+{
+	return 0;
+}
+
+static inline void __exit zswap_debugfs_exit(void) { }
+#endif
+
+/*********************************
+* module init and exit
+**********************************/
+static int __init init_zswap(void)
+{
+	if (!zswap_enabled)
+		return 0;
+
+	pr_info("loading zswap\n");
+	if (zswap_entry_cache_create()) {
+		pr_err("zswap: entry cache creation failed\n");
+		goto error;
+	}
+	if (zswap_page_pool_create()) {
+		pr_err("zswap: page pool initialization failed\n");
+		goto pagepoolfail;
+	}
+	if (zswap_comp_init()) {
+		pr_err("zswap: compressor initialization failed\n");
+		goto compfail;
+	}
+	if (zswap_cpu_init()) {
+		pr_err("zswap: per-cpu initialization failed\n");
+		goto pcpufail;
+	}
+	frontswap_register_ops(&zswap_frontswap_ops);
+	if (zswap_debugfs_init())
+		pr_warn("zswap: debugfs initialization failed\n");
+	return 0;
+pcpufail:
+	zswap_comp_exit();
+compfail:
+	zswap_page_pool_destroy();
+pagepoolfail:
+	zswap_entry_cache_destroy();
+error:
+	return -ENOMEM;
+}
+/* must be late so crypto has time to come up */
+late_initcall(init_zswap);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Seth Jennings <sjenning@linux.vnet.ibm.com>");
+MODULE_DESCRIPTION("Compression backend for frontswap pages");
-- 
1.8.1.1



* [PATCHv4 4/7] mm: break up swap_writepage() for frontswap backends
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
                   ` (2 preceding siblings ...)
  2013-01-29 21:40 ` [PATCHv4 3/7] zswap: add to mm/ Seth Jennings
@ 2013-01-29 21:40 ` Seth Jennings
  2013-01-29 21:40 ` [PATCHv4 5/7] mm: allow for outstanding swap writeback accounting Seth Jennings
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 21:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

swap_writepage() is currently where frontswap hooks into the swap
write path to capture pages with the frontswap_store() function.
However, if a frontswap backend wants to "resume" the writeback of
a page to the swap device, it can't call swap_writepage() as
the page will simply reenter the backend.

This patch separates swap_writepage() into a top and bottom half, the
bottom half named __swap_writepage() to allow a frontswap backend,
like zswap, to resume writeback beyond the frontswap_store() hook.

__add_to_swap_cache() is also made non-static so that the page for
which writeback is to be resumed can be added to the swap cache.
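
A backend that wants to resume writeback can then do, roughly (a
heavily simplified sketch; swap cache setup and page locking are
omitted, and the complete version is zswap's flush code in patch 6/7):

	struct writeback_control wbc = { .sync_mode = WB_SYNC_NONE };

	/* put the page back in the swap cache, then write it out
	 * directly, bypassing the frontswap_store() hook */
	__add_to_swap_cache(page, swp_entry(type, offset));
	SetPageUptodate(page);
	(void)__swap_writepage(page, &wbc);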

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/swap.h |  2 ++
 mm/page_io.c         | 16 +++++++++++++---
 mm/swap_state.c      |  2 +-
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 68df9c1..fc8920d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -321,6 +321,7 @@ static inline void mem_cgroup_uncharge_swap(swp_entry_t ent)
 /* linux/mm/page_io.c */
 extern int swap_readpage(struct page *);
 extern int swap_writepage(struct page *page, struct writeback_control *wbc);
+extern int __swap_writepage(struct page *page, struct writeback_control *wbc);
 extern int swap_set_page_dirty(struct page *page);
 extern void end_swap_bio_read(struct bio *bio, int err);
 
@@ -335,6 +336,7 @@ extern struct address_space swapper_space;
 extern void show_swap_cache_info(void);
 extern int add_to_swap(struct page *);
 extern int add_to_swap_cache(struct page *, swp_entry_t, gfp_t);
+extern int __add_to_swap_cache(struct page *page, swp_entry_t entry);
 extern void __delete_from_swap_cache(struct page *);
 extern void delete_from_swap_cache(struct page *);
 extern void free_page_and_swap_cache(struct page *);
diff --git a/mm/page_io.c b/mm/page_io.c
index 78eee32..1cb382d 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -179,15 +179,15 @@ bad_bmap:
 	goto out;
 }
 
+int __swap_writepage(struct page *page, struct writeback_control *wbc);
+
 /*
  * We may have stale swap cache pages in memory: notice
  * them here and get rid of the unnecessary final write.
  */
 int swap_writepage(struct page *page, struct writeback_control *wbc)
 {
-	struct bio *bio;
-	int ret = 0, rw = WRITE;
-	struct swap_info_struct *sis = page_swap_info(page);
+	int ret = 0;
 
 	if (try_to_free_swap(page)) {
 		unlock_page(page);
@@ -199,6 +199,16 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 		end_page_writeback(page);
 		goto out;
 	}
+	ret = __swap_writepage(page, wbc);
+out:
+	return ret;
+}
+
+int __swap_writepage(struct page *page, struct writeback_control *wbc)
+{
+	struct bio *bio;
+	int ret = 0, rw = WRITE;
+	struct swap_info_struct *sis = page_swap_info(page);
 
 	if (sis->flags & SWP_FILE) {
 		struct kiocb kiocb;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0cb36fb..7eded9c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -67,7 +67,7 @@ void show_swap_cache_info(void)
  * __add_to_swap_cache resembles add_to_page_cache_locked on swapper_space,
  * but sets SwapCache flag and private instead of mapping and index.
  */
-static int __add_to_swap_cache(struct page *page, swp_entry_t entry)
+int __add_to_swap_cache(struct page *page, swp_entry_t entry)
 {
 	int error;
 
-- 
1.8.1.1



* [PATCHv4 5/7] mm: allow for outstanding swap writeback accounting
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
                   ` (3 preceding siblings ...)
  2013-01-29 21:40 ` [PATCHv4 4/7] mm: break up swap_writepage() for frontswap backends Seth Jennings
@ 2013-01-29 21:40 ` Seth Jennings
  2013-01-29 21:40 ` [PATCHv4 6/7] zswap: add flushing support Seth Jennings
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 21:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

To prevent flooding the swap device with writebacks, frontswap
backends need to count and limit the number of outstanding
writebacks.  The incrementing of the counter can be done before
the call to __swap_writepage().  However, the caller must receive
a notification when the writeback completes in order to decrement
the counter.

To achieve this functionality, this patch modifies
__swap_writepage() to take the bio completion callback function
as an argument.

end_swap_bio_write(), the normal bio completion function, is also
made non-static so that code doing the accounting can call it
after the accounting is done.
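
A backend's completion callback then wraps end_swap_bio_write() and
does its accounting, e.g. (a sketch with hypothetical names; zswap's
actual callback is added in the next patch):

	static atomic_t outstanding_writes = ATOMIC_INIT(0);

	static void my_end_swap_write(struct bio *bio, int err)
	{
		end_swap_bio_write(bio, err);
		atomic_dec(&outstanding_writes);
	}

	...
	atomic_inc(&outstanding_writes);
	__swap_writepage(page, &wbc, my_end_swap_write);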

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
---
 include/linux/swap.h |  4 +++-
 mm/page_io.c         | 12 +++++++-----
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index fc8920d..98981f0 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -321,7 +321,9 @@ static inline void mem_cgroup_uncharge_swap(swp_entry_t ent)
 /* linux/mm/page_io.c */
 extern int swap_readpage(struct page *);
 extern int swap_writepage(struct page *page, struct writeback_control *wbc);
-extern int __swap_writepage(struct page *page, struct writeback_control *wbc);
+extern void end_swap_bio_write(struct bio *bio, int err);
+extern int __swap_writepage(struct page *page, struct writeback_control *wbc,
+	void (*end_write_func)(struct bio *, int));
 extern int swap_set_page_dirty(struct page *page);
 extern void end_swap_bio_read(struct bio *bio, int err);
 
diff --git a/mm/page_io.c b/mm/page_io.c
index 1cb382d..56276fe 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -42,7 +42,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
 	return bio;
 }
 
-static void end_swap_bio_write(struct bio *bio, int err)
+void end_swap_bio_write(struct bio *bio, int err)
 {
 	const int uptodate = test_bit(BIO_UPTODATE, &bio->bi_flags);
 	struct page *page = bio->bi_io_vec[0].bv_page;
@@ -179,7 +179,8 @@ bad_bmap:
 	goto out;
 }
 
-int __swap_writepage(struct page *page, struct writeback_control *wbc);
+int __swap_writepage(struct page *page, struct writeback_control *wbc,
+	void (*end_write_func)(struct bio *, int));
 
 /*
  * We may have stale swap cache pages in memory: notice
@@ -199,12 +200,13 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 		end_page_writeback(page);
 		goto out;
 	}
-	ret = __swap_writepage(page, wbc);
+	ret = __swap_writepage(page, wbc, end_swap_bio_write);
 out:
 	return ret;
 }
 
-int __swap_writepage(struct page *page, struct writeback_control *wbc)
+int __swap_writepage(struct page *page, struct writeback_control *wbc,
+	void (*end_write_func)(struct bio *, int))
 {
 	struct bio *bio;
 	int ret = 0, rw = WRITE;
@@ -236,7 +238,7 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc)
 		return ret;
 	}
 
-	bio = get_swap_bio(GFP_NOIO, page, end_swap_bio_write);
+	bio = get_swap_bio(GFP_NOIO, page, end_write_func);
 	if (bio == NULL) {
 		set_page_dirty(page);
 		unlock_page(page);
-- 
1.8.1.1



* [PATCHv4 6/7] zswap: add flushing support
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
                   ` (4 preceding siblings ...)
  2013-01-29 21:40 ` [PATCHv4 5/7] mm: allow for outstanding swap writeback accounting Seth Jennings
@ 2013-01-29 21:40 ` Seth Jennings
  2013-01-29 23:03   ` Andrew Morton
  2013-02-01  7:27   ` Minchan Kim
  2013-01-29 21:40 ` [PATCHv4 7/7] zswap: add documentation Seth Jennings
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 21:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

This patch adds support for flushing pages out of the compressed
pool to the swap device.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
---
 mm/zswap.c | 451 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 434 insertions(+), 17 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a6c2928..b8e5673 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -34,6 +34,12 @@
 #include <linux/mempool.h>
 #include <linux/zsmalloc.h>
 
+#include <linux/mm_types.h>
+#include <linux/page-flags.h>
+#include <linux/swapops.h>
+#include <linux/writeback.h>
+#include <linux/pagemap.h>
+
 /*********************************
 * statistics
 **********************************/
@@ -41,6 +47,8 @@
 static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
 /* The number of compressed pages currently stored in zswap */
 static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
+/* The number of outstanding pages awaiting writeback */
+static atomic_t zswap_outstanding_flushes = ATOMIC_INIT(0);
 
 /*
  * The statistics below are not protected from concurrent access for
@@ -49,9 +57,14 @@ static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
  * certain event is occurring.
 */
 static u64 zswap_pool_limit_hit;
+static u64 zswap_flushed_pages;
 static u64 zswap_reject_compress_poor;
+static u64 zswap_flush_attempted;
+static u64 zswap_reject_tmppage_fail;
+static u64 zswap_reject_flush_fail;
 static u64 zswap_reject_zsmalloc_fail;
 static u64 zswap_reject_kmemcache_fail;
+static u64 zswap_saved_by_flush;
 static u64 zswap_duplicate_entry;
 
 /*********************************
@@ -80,6 +93,14 @@ static unsigned int zswap_max_compression_ratio = 80;
 module_param_named(max_compression_ratio,
 			zswap_max_compression_ratio, uint, 0644);
 
+/*
+ * Maximum number of outstanding flushes allowed at any given time.
+ * This is to prevent decompressing an unbounded number of compressed
+ * pages into the swap cache all at once, and to help with writeback
+ * congestion.
+*/
+#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
+
 /*********************************
 * compression functions
 **********************************/
@@ -145,14 +166,23 @@ static void zswap_comp_exit(void)
 **********************************/
 struct zswap_entry {
 	struct rb_node rbnode;
+	struct list_head lru;
+	int refcount;
 	unsigned type;
 	pgoff_t offset;
 	unsigned long handle;
 	unsigned int length;
 };
 
+/*
+ * The tree lock in the zswap_tree struct protects a few things:
+ * - the rbtree
+ * - the lru list
+ * - the refcount field of each entry in the tree
+ */
 struct zswap_tree {
 	struct rb_root rbroot;
+	struct list_head lru;
 	spinlock_t lock;
 	struct zs_pool *pool;
 };
@@ -184,6 +214,8 @@ static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
 	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
 	if (!entry)
 		return NULL;
+	INIT_LIST_HEAD(&entry->lru);
+	entry->refcount = 1;
 	return entry;
 }
 
@@ -192,6 +224,17 @@ static inline void zswap_entry_cache_free(struct zswap_entry *entry)
 	kmem_cache_free(zswap_entry_cache, entry);
 }
 
+static inline void zswap_entry_get(struct zswap_entry *entry)
+{
+	entry->refcount++;
+}
+
+static inline int zswap_entry_put(struct zswap_entry *entry)
+{
+	entry->refcount--;
+	return entry->refcount;
+}
+
 /*********************************
 * rbtree functions
 **********************************/
@@ -367,6 +410,278 @@ static struct zs_ops zswap_zs_ops = {
 };
 
 /*********************************
+* flush code
+**********************************/
+static void zswap_end_swap_write(struct bio *bio, int err)
+{
+	end_swap_bio_write(bio, err);
+	atomic_dec(&zswap_outstanding_flushes);
+	zswap_flushed_pages++;
+}
+
+/*
+ * zswap_get_swap_cache_page
+ *
+ * This is an adaptation of read_swap_cache_async()
+ *
+ * On success, the page is returned in retpage
+ * Returns 0 if page was already in the swap cache, page is not locked
+ * Returns 1 if the new page needs to be populated, page is locked
+ */
+static int zswap_get_swap_cache_page(swp_entry_t entry,
+				struct page **retpage)
+{
+	struct page *found_page, *new_page = NULL;
+	int err;
+
+	*retpage = NULL;
+	do {
+		/*
+		 * First check the swap cache.  Since this is normally
+		 * called after lookup_swap_cache() failed, re-calling
+		 * that would confuse statistics.
+		 */
+		found_page = find_get_page(&swapper_space, entry.val);
+		if (found_page)
+			break;
+
+		/*
+		 * Get a new page to read into from swap.
+		 */
+		if (!new_page) {
+			new_page = alloc_page(GFP_KERNEL);
+			if (!new_page)
+				break; /* Out of memory */
+		}
+
+		/*
+		 * call radix_tree_preload() while we can wait.
+		 */
+		err = radix_tree_preload(GFP_KERNEL);
+		if (err)
+			break;
+
+		/*
+		 * Swap entry may have been freed since our caller observed it.
+		 */
+		err = swapcache_prepare(entry);
+		if (err == -EEXIST) { /* seems racy */
+			radix_tree_preload_end();
+			continue;
+		}
+		if (err) { /* swp entry is obsolete ? */
+			radix_tree_preload_end();
+			break;
+		}
+
+		/* May fail (-ENOMEM) if radix-tree node allocation failed. */
+		__set_page_locked(new_page);
+		SetPageSwapBacked(new_page);
+		err = __add_to_swap_cache(new_page, entry);
+		if (likely(!err)) {
+			radix_tree_preload_end();
+			lru_cache_add_anon(new_page);
+			*retpage = new_page;
+			return 1;
+		}
+		radix_tree_preload_end();
+		ClearPageSwapBacked(new_page);
+		__clear_page_locked(new_page);
+		/*
+		 * add_to_swap_cache() doesn't return -EEXIST, so we can safely
+		 * clear SWAP_HAS_CACHE flag.
+		 */
+		swapcache_free(entry, NULL);
+	} while (err != -ENOMEM);
+
+	if (new_page)
+		page_cache_release(new_page);
+	if (!found_page)
+		return -ENOMEM;
+	*retpage = found_page;
+	return 0;
+}
+
+static int zswap_flush_entry(struct zswap_entry *entry)
+{
+	unsigned long type = entry->type;
+	struct zswap_tree *tree = zswap_trees[type];
+	struct page *page;
+	swp_entry_t swpentry;
+	u8 *src, *dst;
+	unsigned int dlen;
+	int ret, refcount;
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_NONE,
+	};
+
+	/* get/allocate page in the swap cache */
+	swpentry = swp_entry(type, entry->offset);
+	ret = zswap_get_swap_cache_page(swpentry, &page);
+	if (ret < 0)
+		return ret;
+	else if (ret) {
+		/* decompress */
+		dlen = PAGE_SIZE;
+		src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
+		dst = kmap_atomic(page);
+		ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
+				dst, &dlen);
+		kunmap_atomic(dst);
+		zs_unmap_object(tree->pool, entry->handle);
+		BUG_ON(ret);
+		BUG_ON(dlen != PAGE_SIZE);
+		SetPageUptodate(page);
+	} else {
+		/* page is already in the swap cache, ignore for now */
+		spin_lock(&tree->lock);
+		refcount = zswap_entry_put(entry);
+		spin_unlock(&tree->lock);
+
+		if (likely(refcount))
+			return 0;
+
+		/* if the refcount is zero, invalidate must have come in */
+		/* free */
+		zs_free(tree->pool, entry->handle);
+		zswap_entry_cache_free(entry);
+		atomic_dec(&zswap_stored_pages);
+
+		return 0;
+	}
+
+	/* start writeback */
+	SetPageReclaim(page);
+	/*
+	 * Return value is ignored here because it doesn't change anything
+	 * for us.  Page is returned unlocked.
+	 */
+	(void)__swap_writepage(page, &wbc, zswap_end_swap_write);
+	page_cache_release(page);
+	atomic_inc(&zswap_outstanding_flushes);
+
+	/* remove */
+	spin_lock(&tree->lock);
+	refcount = zswap_entry_put(entry);
+	if (refcount > 1) {
+		/* load in progress, load will free */
+		spin_unlock(&tree->lock);
+		return 0;
+	}
+	if (refcount == 1)
+		/* no invalidate yet, remove from rbtree */
+		rb_erase(&entry->rbnode, &tree->rbroot);
+	spin_unlock(&tree->lock);
+
+	/* free */
+	zs_free(tree->pool, entry->handle);
+	zswap_entry_cache_free(entry);
+	atomic_dec(&zswap_stored_pages);
+
+	return 0;
+}
+
+static void zswap_flush_entries(unsigned type, int nr)
+{
+	struct zswap_tree *tree = zswap_trees[type];
+	struct zswap_entry *entry;
+	int i, ret;
+
+	/*
+	 * This limit is arbitrary for now until a better
+	 * policy can be implemented. This is so we don't
+	 * eat all of RAM decompressing pages for writeback.
+	 */
+	if (atomic_read(&zswap_outstanding_flushes) >
+		ZSWAP_MAX_OUTSTANDING_FLUSHES)
+		return;
+
+	for (i = 0; i < nr; i++) {
+		/* dequeue from lru */
+		spin_lock(&tree->lock);
+		if (list_empty(&tree->lru)) {
+			spin_unlock(&tree->lock);
+			break;
+		}
+		entry = list_first_entry(&tree->lru,
+				struct zswap_entry, lru);
+		list_del(&entry->lru);
+		zswap_entry_get(entry);
+		spin_unlock(&tree->lock);
+		ret = zswap_flush_entry(entry);
+		if (ret) {
+			/* put back on the lru */
+			spin_lock(&tree->lock);
+			list_add(&entry->lru, &tree->lru);
+			spin_unlock(&tree->lock);
+		} else {
+			if (atomic_read(&zswap_outstanding_flushes) >
+				ZSWAP_MAX_OUTSTANDING_FLUSHES)
+				break;
+		}
+	}
+}
+
+/*******************************************
+* page pool for temporary compression result
+********************************************/
+#define ZSWAP_TMPPAGE_POOL_PAGES 16
+static LIST_HEAD(zswap_tmppage_list);
+static DEFINE_SPINLOCK(zswap_tmppage_lock);
+
+static void zswap_tmppage_pool_destroy(void)
+{
+	struct page *page, *tmppage;
+
+	spin_lock(&zswap_tmppage_lock);
+	list_for_each_entry_safe(page, tmppage, &zswap_tmppage_list, lru) {
+		list_del(&page->lru);
+		__free_pages(page, 1);
+	}
+	spin_unlock(&zswap_tmppage_lock);
+}
+
+static int zswap_tmppage_pool_create(void)
+{
+	int i;
+	struct page *page;
+
+	for (i = 0; i < ZSWAP_TMPPAGE_POOL_PAGES; i++) {
+		page = alloc_pages(GFP_KERNEL, 1);
+		if (!page) {
+			zswap_tmppage_pool_destroy();
+			return -ENOMEM;
+		}
+		spin_lock(&zswap_tmppage_lock);
+		list_add(&page->lru, &zswap_tmppage_list);
+		spin_unlock(&zswap_tmppage_lock);
+	}
+	return 0;
+}
+
+static inline struct page *zswap_tmppage_alloc(void)
+{
+	struct page *page;
+
+	spin_lock(&zswap_tmppage_lock);
+	if (list_empty(&zswap_tmppage_list)) {
+		spin_unlock(&zswap_tmppage_lock);
+		return NULL;
+	}
+	page = list_first_entry(&zswap_tmppage_list, struct page, lru);
+	list_del(&page->lru);
+	spin_unlock(&zswap_tmppage_lock);
+	return page;
+}
+
+static inline void zswap_tmppage_free(struct page *page)
+{
+	spin_lock(&zswap_tmppage_lock);
+	list_add(&page->lru, &zswap_tmppage_list);
+	spin_unlock(&zswap_tmppage_lock);
+}
+
+/*********************************
 * frontswap hooks
 **********************************/
 /* attempts to compress and store a single page */
@@ -378,7 +693,9 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
 	unsigned int dlen = PAGE_SIZE;
 	unsigned long handle;
 	char *buf;
-	u8 *src, *dst;
+	u8 *src, *dst, *tmpdst;
+	struct page *tmppage;
+	bool flush_attempted = false;
 
 	if (!tree) {
 		ret = -ENODEV;
@@ -392,12 +709,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
 	kunmap_atomic(src);
 	if (ret) {
 		ret = -EINVAL;
-		goto putcpu;
+		goto freepage;
 	}
 	if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
 		zswap_reject_compress_poor++;
 		ret = -E2BIG;
-		goto putcpu;
+		goto freepage;
 	}
 
 	/* store */
@@ -405,15 +722,46 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
 		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
 			__GFP_NOWARN);
 	if (!handle) {
-		zswap_reject_zsmalloc_fail++;
-		ret = -ENOMEM;
-		goto putcpu;
+		zswap_flush_attempted++;
+		/*
+		 * Copy compressed buffer out of per-cpu storage so
+		 * we can re-enable preemption.
+		 */
+		tmppage = zswap_tmppage_alloc();
+		if (!tmppage) {
+			zswap_reject_tmppage_fail++;
+			ret = -ENOMEM;
+			goto freepage;
+		}
+		flush_attempted = true;
+		tmpdst = page_address(tmppage);
+		memcpy(tmpdst, dst, dlen);
+		dst = tmpdst;
+		put_cpu_var(zswap_dstmem);
+
+		/* try to free up some space */
+		/* TODO: replace with more targeted policy */
+		zswap_flush_entries(type, 16);
+		/* try again, allowing wait */
+		handle = zs_malloc(tree->pool, dlen,
+			__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
+				__GFP_NOWARN);
+		if (!handle) {
+			/* still no space, fail */
+			zswap_reject_zsmalloc_fail++;
+			ret = -ENOMEM;
+			goto freepage;
+		}
+		zswap_saved_by_flush++;
 	}
 
 	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
 	memcpy(buf, dst, dlen);
 	zs_unmap_object(tree->pool, handle);
-	put_cpu_var(zswap_dstmem);
+	if (flush_attempted)
+		zswap_tmppage_free(tmppage);
+	else
+		put_cpu_var(zswap_dstmem);
 
 	/* allocate entry */
 	entry = zswap_entry_cache_alloc(GFP_KERNEL);
@@ -436,16 +784,19 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
 		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
 		if (ret == -EEXIST) {
 			zswap_duplicate_entry++;
-
-			/* remove from rbtree */
+			/* remove from rbtree and lru */
 			rb_erase(&dupentry->rbnode, &tree->rbroot);
-			
-			/* free */
-			zs_free(tree->pool, dupentry->handle);
-			zswap_entry_cache_free(dupentry);
-			atomic_dec(&zswap_stored_pages);
+			if (dupentry->lru.next != LIST_POISON1)
+				list_del(&dupentry->lru);
+			if (!zswap_entry_put(dupentry)) {
+				/* free */
+				zs_free(tree->pool, dupentry->handle);
+				zswap_entry_cache_free(dupentry);
+				atomic_dec(&zswap_stored_pages);
+			}
 		}
 	} while (ret == -EEXIST);
+	list_add_tail(&entry->lru, &tree->lru);
 	spin_unlock(&tree->lock);
 
 	/* update stats */
@@ -453,8 +804,11 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
 
 	return 0;
 
-putcpu:
-	put_cpu_var(zswap_dstmem);
+freepage:
+	if (flush_attempted)
+		zswap_tmppage_free(tmppage);
+	else
+		put_cpu_var(zswap_dstmem);
 reject:
 	return ret;
 }
@@ -469,10 +823,21 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page
 	struct zswap_entry *entry;
 	u8 *src, *dst;
 	unsigned int dlen;
+	int refcount;
 
 	/* find */
 	spin_lock(&tree->lock);
 	entry = zswap_rb_search(&tree->rbroot, offset);
+	if (!entry) {
+		/* entry was flushed */
+		spin_unlock(&tree->lock);
+		return -1;
+	}
+	zswap_entry_get(entry);
+
+	/* remove from lru */
+	if (entry->lru.next != LIST_POISON1)
+		list_del(&entry->lru);
 	spin_unlock(&tree->lock);
 
 	/* decompress */
@@ -484,6 +849,25 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page
 	kunmap_atomic(dst);
 	zs_unmap_object(tree->pool, entry->handle);
 
+	spin_lock(&tree->lock);
+	refcount = zswap_entry_put(entry);
+	if (likely(refcount)) {
+		list_add_tail(&entry->lru, &tree->lru);
+		spin_unlock(&tree->lock);
+		return 0;
+	}
+	spin_unlock(&tree->lock);
+
+	/*
+	 * We don't have to unlink from the rbtree because zswap_flush_entry()
+	 * or zswap_frontswap_invalidate_page() has already done this for us if
+	 * we hold the last reference.
+	 */
+	/* free */
+	zs_free(tree->pool, entry->handle);
+	zswap_entry_cache_free(entry);
+	atomic_dec(&zswap_stored_pages);
+
 	return 0;
 }
 
@@ -492,14 +876,27 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
 {
 	struct zswap_tree *tree = zswap_trees[type];
 	struct zswap_entry *entry;
+	int refcount;
 
 	/* find */
 	spin_lock(&tree->lock);
 	entry = zswap_rb_search(&tree->rbroot, offset);
+	if (!entry) {
+		/* entry was flushed */
+		spin_unlock(&tree->lock);
+		return;
+	}
 
-	/* remove from rbtree */
+	/* remove from rbtree and lru */
 	rb_erase(&entry->rbnode, &tree->rbroot);
+	if (entry->lru.next != LIST_POISON1)
+		list_del(&entry->lru);
+	refcount = zswap_entry_put(entry);
 	spin_unlock(&tree->lock);
+	if (refcount) {
+		/* must be flushing */
+		return;
+	}
 
 	/* free */
 	zs_free(tree->pool, entry->handle);
@@ -528,6 +925,7 @@ static void zswap_frontswap_invalidate_area(unsigned type)
 		node = next;
 	}
 	tree->rbroot = RB_ROOT;
+	INIT_LIST_HEAD(&tree->lru);
 	spin_unlock(&tree->lock);
 }
 
@@ -543,6 +941,7 @@ static void zswap_frontswap_init(unsigned type)
 	if (!tree->pool)
 		goto freetree;
 	tree->rbroot = RB_ROOT;
+	INIT_LIST_HEAD(&tree->lru);
 	spin_lock_init(&tree->lock);
 	zswap_trees[type] = tree;
 	return;
@@ -578,20 +977,32 @@ static int __init zswap_debugfs_init(void)
 	if (!zswap_debugfs_root)
 		return -ENOMEM;
 
+	debugfs_create_u64("saved_by_flush", S_IRUGO,
+			zswap_debugfs_root, &zswap_saved_by_flush);
 	debugfs_create_u64("pool_limit_hit", S_IRUGO,
 			zswap_debugfs_root, &zswap_pool_limit_hit);
+	debugfs_create_u64("reject_flush_attempted", S_IRUGO,
+			zswap_debugfs_root, &zswap_flush_attempted);
+	debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_tmppage_fail);
+	debugfs_create_u64("reject_flush_fail", S_IRUGO,
+			zswap_debugfs_root, &zswap_reject_flush_fail);
 	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
 			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
 	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
 			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
 	debugfs_create_u64("reject_compress_poor", S_IRUGO,
 			zswap_debugfs_root, &zswap_reject_compress_poor);
+	debugfs_create_u64("flushed_pages", S_IRUGO,
+			zswap_debugfs_root, &zswap_flushed_pages);
 	debugfs_create_u64("duplicate_entry", S_IRUGO,
 			zswap_debugfs_root, &zswap_duplicate_entry);
 	debugfs_create_atomic_t("pool_pages", S_IRUGO,
 			zswap_debugfs_root, &zswap_pool_pages);
 	debugfs_create_atomic_t("stored_pages", S_IRUGO,
 			zswap_debugfs_root, &zswap_stored_pages);
+	debugfs_create_atomic_t("outstanding_flushes", S_IRUGO,
+			zswap_debugfs_root, &zswap_outstanding_flushes);
 
 	return 0;
 }
@@ -627,6 +1038,10 @@ static int __init init_zswap(void)
 		pr_err("zswap: page pool initialization failed\n");
 		goto pagepoolfail;
 	}
+	if (zswap_tmppage_pool_create()) {
+		pr_err("zswap: workmem pool initialization failed\n");
+		goto tmppoolfail;
+	}
 	if (zswap_comp_init()) {
 		pr_err("zswap: compressor initialization failed\n");
 		goto compfail;
@@ -642,6 +1057,8 @@ static int __init init_zswap(void)
 pcpufail:
 	zswap_comp_exit();
 compfail:
+	zswap_tmppage_pool_destroy();
+tmppoolfail:
 	zswap_page_pool_destroy();
 pagepoolfail:
 	zswap_entry_cache_destory();
-- 
1.8.1.1
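
For reference, the refcount protocol implemented above condenses to the
following illustrative sketch; it consolidates the repeated put-and-free
pattern from zswap_flush_entry(), zswap_frontswap_store() and
zswap_frontswap_load() (the helper name zswap_entry_release is made up,
not from the patch):

/*
 * Entries start with refcount 1, held by the rbtree/LRU.  Loads and
 * flushes take a temporary reference via zswap_entry_get().  Whoever
 * drops the count to zero (under tree->lock) frees the entry; by that
 * point the entry has already been removed from the rbtree and LRU.
 */
static void zswap_entry_release(struct zswap_tree *tree,
				struct zswap_entry *entry)
{
	int refcount;

	spin_lock(&tree->lock);
	refcount = zswap_entry_put(entry);
	spin_unlock(&tree->lock);
	if (refcount)
		return;	/* another holder is responsible for the free */

	zs_free(tree->pool, entry->handle);
	zswap_entry_cache_free(entry);
	atomic_dec(&zswap_stored_pages);
}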


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCHv4 7/7] zswap: add documentation
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
                   ` (5 preceding siblings ...)
  2013-01-29 21:40 ` [PATCHv4 6/7] zswap: add flushing support Seth Jennings
@ 2013-01-29 21:40 ` Seth Jennings
  2013-01-29 23:07   ` Andrew Morton
  2013-01-29 22:14 ` [PATCHv4 0/7] zswap: compressed swap caching Joe Perches
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 21:40 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Seth Jennings, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

This patch adds the documentation file for the zswap functionality.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
---
 Documentation/vm/zswap.txt | 73 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)
 create mode 100644 Documentation/vm/zswap.txt

diff --git a/Documentation/vm/zswap.txt b/Documentation/vm/zswap.txt
new file mode 100644
index 0000000..5d00ce9
--- /dev/null
+++ b/Documentation/vm/zswap.txt
@@ -0,0 +1,73 @@
+Overview:
+
+Zswap is a lightweight compressed cache for swap pages. It takes
+pages that are in the process of being swapped out and attempts to
+compress them into a dynamically allocated RAM-based memory pool.
+If this process is successful, the writeback to the swap device is
+deferred and, in many cases, avoided completely.  This results in
+a significant I/O reduction and performance gains for systems that
+are swapping.
+
+Zswap provides compressed swap caching that basically trades CPU cycles
+for reduced swap I/O.  This trade-off can result in a significant
+performance improvement as reads from and writes to the compressed
+cache are almost always faster than reading from a swap device,
+which incurs the latency of an asynchronous block I/O read.
+
+Some potential benefits:
+* Desktop/laptop users with limited RAM capacities can mitigate the
+    performance impact of swapping.
+* Overcommitted guests that share a common I/O resource can
+    dramatically reduce their swap I/O pressure, avoiding heavy
+    handed I/O throttling by the hypervisor.  This allows more work
+    to get done with less impact on the guest workload and guests
+    sharing the I/O subsystem.
+* Users with SSDs as swap devices can extend the life of the device by
+    drastically reducing life-shortening writes.
+
+Zswap evicts pages from the compressed cache on an LRU basis to the backing
+swap device when the compressed pool reaches its size limit or the pool is
+unable to obtain additional pages from the buddy allocator.  This
+requirement had been identified in prior community discussions.
+
+To enable zswap, the "enabled" attribute must be set to 1 at boot time.
+e.g. zswap.enabled=1
+
+Design:
+
+Zswap receives pages for compression through the Frontswap API and
+is able to evict pages from its own compressed pool on an LRU basis
+and write them back to the backing swap device in the case that the
+compressed pool is full or unable to secure additional pages from
+the buddy allocator.
+
+Zswap makes use of zsmalloc for managing the compressed memory
+pool.  This is because zsmalloc is specifically designed to minimize
+fragmentation on large (> PAGE_SIZE/2) allocation sizes.  Each
+allocation in zsmalloc is not directly accessible by address.
+Rather, a handle is returned by the allocation routine and that handle
+must be mapped before being accessed.  The compressed memory pool grows
+on demand and shrinks as compressed pages are freed.  The pool is
+not preallocated.
+
+When a swap page is passed from frontswap to zswap, zswap maintains
+a mapping of the swap entry, a combination of the swap type and swap
+offset, to the zsmalloc handle that references that compressed swap
+page.  This mapping is achieved with a red-black tree per swap type.
+The swap offset is the search key for the tree nodes.
+
+Zswap seeks to be simple in its policies.  Sysfs attributes allow for
+two user controlled policies:
+* max_compression_ratio - Maximum compression ratio, as a percentage,
+    for an acceptable compressed page. Any page that does not compress
+    by at least this ratio will be rejected.
+* max_pool_percent - The maximum percentage of memory that the compressed
+    pool can occupy.
+
+Zswap allows the compressor to be selected at kernel boot time by
+setting the "compressor" attribute.  The default compressor is lzo.
+e.g. zswap.compressor=deflate
+
+A debugfs interface is provided for various statistics about pool size,
+number of pages stored, and counters for the reasons pages
+are rejected.
-- 
1.8.1.1
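
The allocate/map/copy/unmap discipline described above mirrors the store
path in patch 3/7; a minimal sketch, assuming the three-argument
zs_malloc() used by this patchset (the helper name zswap_store_demo is
hypothetical):

#include <linux/gfp.h>
#include <linux/string.h>
#include <linux/zsmalloc.h>

/* Store dlen bytes of compressed data; returns an opaque handle, 0 on failure. */
static unsigned long zswap_store_demo(struct zs_pool *pool, const u8 *buf,
				      unsigned int dlen)
{
	unsigned long handle;
	void *dst;

	handle = zs_malloc(pool, dlen,
		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOWARN);
	if (!handle)
		return 0;
	/* handles are not pointers; the object must be mapped to be accessed */
	dst = zs_map_object(pool, handle, ZS_MM_WO);
	memcpy(dst, buf, dlen);
	zs_unmap_object(pool, handle);
	return handle;
}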


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
                   ` (6 preceding siblings ...)
  2013-01-29 21:40 ` [PATCHv4 7/7] zswap: add documentation Seth Jennings
@ 2013-01-29 22:14 ` Joe Perches
  2013-01-29 22:49   ` Seth Jennings
  2013-02-01  1:39 ` Simon Jeons
       [not found] ` <5110287A.5050200@linux.vnet.ibm.com>
  9 siblings, 1 reply; 34+ messages in thread
From: Joe Perches @ 2013-01-29 22:14 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
> The code required for the flushing is in a separate patch now
> as requested.

What tree does this apply to?
Both -next and linus fail to compile.

There's a whitespace error applying 3/7 (line 543 in zswap.c)
and on 3.8-rc5 (allyesconfig x86-32):

  CC      mm/zswap.o
mm/zswap.c:407:15: error: variable ‘zswap_zs_ops’ has initializer but incomplete type
mm/zswap.c:408:2: error: unknown field ‘alloc’ specified in initializer
mm/zswap.c:408:2: warning: excess elements in struct initializer [enabled by default]
mm/zswap.c:408:2: warning: (near initialization for ‘zswap_zs_ops’) [enabled by default]
mm/zswap.c:409:2: error: unknown field ‘free’ specified in initializer
mm/zswap.c:410:1: warning: excess elements in struct initializer [enabled by default]
mm/zswap.c:410:1: warning: (near initialization for ‘zswap_zs_ops’) [enabled by default]
mm/zswap.c: In function ‘zswap_frontswap_store’:
mm/zswap.c:723:4: error: too many arguments to function ‘zs_malloc’
In file included from mm/zswap.c:35:0:
include/linux/zsmalloc.h:34:15: note: declared here
mm/zswap.c:748:5: error: too many arguments to function ‘zs_malloc’
In file included from mm/zswap.c:35:0:
include/linux/zsmalloc.h:34:15: note: declared here
mm/zswap.c: In function ‘zswap_frontswap_init’:
mm/zswap.c:940:2: warning: passing argument 2 of ‘zs_create_pool’ makes integer from pointer without a cast [enabled by default]
In file included from mm/zswap.c:35:0:
include/linux/zsmalloc.h:31:17: note: expected ‘gfp_t’ but argument is of type ‘struct zs_ops *’
make[1]: *** [mm/zswap.o] Error 1
make: *** [mm/zswap.o] Error 2

I also suggest this patch, which uses a more
current logging style via pr_fmt and
removes the embedded "zswap: " prefixes from
the logging output formats.

---
 mm/zswap.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index b8e5673..1e21f46 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1,5 +1,5 @@
 /*
- * zswap-drv.c - zswap driver file
+ * zswap.c - zswap driver file
  *
  * zswap is a backend for frontswap that takes pages that are in the
  * process of being swapped out and attempts to compress them and store
@@ -20,6 +20,8 @@
  * GNU General Public License for more details.
 */
 
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
 #include <linux/module.h>
 #include <linux/cpu.h>
 #include <linux/highmem.h>
@@ -137,15 +139,14 @@ static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
 static int __init zswap_comp_init(void)
 {
 	if (!crypto_has_comp(zswap_compressor, 0, 0)) {
-		pr_info("zswap: %s compressor not available\n",
-			zswap_compressor);
+		pr_info("%s compressor not available\n", zswap_compressor);
 		/* fall back to default compressor */
 		zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
 		if (!crypto_has_comp(zswap_compressor, 0, 0))
 			/* can't even load the default compressor */
 			return -ENODEV;
 	}
-	pr_info("zswap: using %s compressor\n", zswap_compressor);
+	pr_info("using %s compressor\n", zswap_compressor);
 
 	/* alloc percpu transforms */
 	zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
@@ -296,13 +297,13 @@ static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
 	case CPU_UP_PREPARE:
 		tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
 		if (IS_ERR(tfm)) {
-			pr_err("zswap: can't allocate compressor transform\n");
+			pr_err("can't allocate compressor transform\n");
 			return NOTIFY_BAD;
 		}
 		*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
 		dst = (u8 *)__get_free_pages(GFP_KERNEL, 1);
 		if (!dst) {
-			pr_err("zswap: can't allocate compressor buffer\n");
+			pr_err("can't allocate compressor buffer\n");
 			crypto_free_comp(tfm);
 			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
 			return NOTIFY_BAD;
@@ -949,7 +950,7 @@ static void zswap_frontswap_init(unsigned type)
 freetree:
 	kfree(tree);
 err:
-	pr_err("zswap: alloc failed, zswap disabled for swap type %d\n", type);
+	pr_err("alloc failed, zswap disabled for swap type %d\n", type);
 }
 
 static struct frontswap_ops zswap_frontswap_ops = {
@@ -1031,28 +1032,28 @@ static int __init init_zswap(void)
 
 	pr_info("loading zswap\n");
 	if (zswap_entry_cache_create()) {
-		pr_err("zswap: entry cache creation failed\n");
+		pr_err("entry cache creation failed\n");
 		goto error;
 	}
 	if (zswap_page_pool_create()) {
-		pr_err("zswap: page pool initialization failed\n");
+		pr_err("page pool initialization failed\n");
 		goto pagepoolfail;
 	}
 	if (zswap_tmppage_pool_create()) {
-		pr_err("zswap: workmem pool initialization failed\n");
+		pr_err("workmem pool initialization failed\n");
 		goto tmppoolfail;
 	}
 	if (zswap_comp_init()) {
-		pr_err("zswap: compressor initialization failed\n");
+		pr_err("compressor initialization failed\n");
 		goto compfail;
 	}
 	if (zswap_cpu_init()) {
-		pr_err("zswap: per-cpu initialization failed\n");
+		pr_err("per-cpu initialization failed\n");
 		goto pcpufail;
 	}
 	frontswap_register_ops(&zswap_frontswap_ops);
 	if (zswap_debugfs_init())
-		pr_warn("zswap: debugfs initialization failed\n");
+		pr_warn("debugfs initialization failed\n");
 	return 0;
 pcpufail:
 	zswap_comp_exit();



^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-01-29 22:14 ` [PATCHv4 0/7] zswap: compressed swap caching Joe Perches
@ 2013-01-29 22:49   ` Seth Jennings
  2013-01-30  4:32     ` Minchan Kim
  0 siblings, 1 reply; 34+ messages in thread
From: Seth Jennings @ 2013-01-29 22:49 UTC (permalink / raw)
  To: Joe Perches
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 01/29/2013 04:14 PM, Joe Perches wrote:
> On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
>> The code required for the flushing is in a separate patch now
>> as requested.
> 
> What tree does this apply to?
> Both -next and linus fail to compile.

Link to the build instructions in the cover letter:

>> NOTE: To build, read this:
>> http://lkml.org/lkml/2013/1/28/586

The complexity is due to a conflict with a zsmalloc patch in Greg's
staging tree that has yet to make its way upstream.

Sorry for the inconvenience.

Seth


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 2/7] zsmalloc: promote to lib/
  2013-01-29 21:40 ` [PATCHv4 2/7] zsmalloc: promote to lib/ Seth Jennings
@ 2013-01-29 22:51   ` Andrew Morton
  2013-01-30 16:28     ` Seth Jennings
  2013-02-13 16:00     ` Seth Jennings
  0 siblings, 2 replies; 34+ messages in thread
From: Andrew Morton @ 2013-01-29 22:51 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Tue, 29 Jan 2013 15:40:22 -0600
Seth Jennings <sjenning@linux.vnet.ibm.com> wrote:

> This patch promotes the slab-based zsmalloc memory allocator
> from the staging tree to lib/

Hate to rain on the parade, but...  we haven't reviewed zsmalloc
yet.  At least, I haven't, and I haven't seen others do so.

So how's about we forget that zsmalloc was previously in staging and
send the zsmalloc code out for review?  With a very good changelog
explaining why it exists, what problems it solves, etc.


I peeked.

Don't answer any of the below questions - they are examples of
concepts which should be accessible to readers of the
hopefully-forthcoming very-good-changelog.

- kmap_atomic() returns a void* - there's no need to cast its return value.

- Remove private MAX(), use the (much better implemented) max().

- It says "This was one of the major issues with its predecessor
  (xvmalloc)", but drivers/staging/ramster/xvmalloc.c is still in the tree.

- USE_PGTABLE_MAPPING should be done via Kconfig.

- USE_PGTABLE_MAPPING is interesting and the changelog should go into
  some details.  What are the pros and cons here?  Why do the two
  options exist?  Can we eliminate one mode or the other?

- Various functions are obscure and would benefit from explanatory
  comments.  Good comments explain "why it exists", more than "what it
  does".

  These include get_size_class_index, get_fullness_group,
  insert_zspage, remove_zspage, fix_fullness_group.

  Also a description of this handle encoding thing - what do these
  "handles" refer to?  Why is stuff being encoded into them and how?

- I don't understand how the whole thing works :( If I allocate a
  16 kbyte object with zs_malloc(), what do I get?  16k of
  contiguous memory?  How can it do that if
  USE_PGTABLE_MAPPING=false?  Obviously it can't so it's doing
  something else.  But what?

- What does zs_create_pool() do and how do I use it?  It appears
  to create a pool of all possible object sizes.  But why do we need
  more than one such pool kernel-wide?

- I tried to work out the actual value of ZS_SIZE_CLASSES but it
  made my head spin.

- We really really don't want to merge zsmalloc!  It would be far
  better to use an existing allocator (perhaps after modifying it)
  than to add yet another new one.  The really-good-changelog should
  be compelling on this point, please.

See, I (and I assume others) are totally on first base here and we need
to get through this before we can get onto zswap.  Sorry. 
drivers/staging is where code goes to be ignored :(
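
The kmap_atomic() and max() points above are mechanical; an illustrative
fragment of what they ask for (the function and its arguments are
hypothetical, not taken from the patches):

#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/string.h>

static void zswap_style_demo(struct page *page, u8 *out, size_t slen,
			     size_t dlen)
{
	u8 *src = kmap_atomic(page);	/* no cast: kmap_atomic() returns void * */
	size_t len = max(slen, dlen);	/* max() from kernel.h, not a private MAX() */

	memcpy(out, src, min_t(size_t, len, PAGE_SIZE));
	kunmap_atomic(src);
}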

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 6/7] zswap: add flushing support
  2013-01-29 21:40 ` [PATCHv4 6/7] zswap: add flushing support Seth Jennings
@ 2013-01-29 23:03   ` Andrew Morton
  2013-02-01  7:27   ` Minchan Kim
  1 sibling, 0 replies; 34+ messages in thread
From: Andrew Morton @ 2013-01-29 23:03 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Tue, 29 Jan 2013 15:40:26 -0600
Seth Jennings <sjenning@linux.vnet.ibm.com> wrote:

> This patchset adds support for flushing pages out of the compressed
> pool to the swap device

I do so hate that word "flush".  Sometimes it means "writeback", other
times it means "invalidate".  And perhaps it means "copy elsewhere then
reclaim".

Please describe with great specificity what this patch actually does
with pages, and why it does it.  And where the compression factors into
this.

The code appears to take a compressed page, decompress it into
swapcache via some means.  And then, for unexplained reasons, it starts
writeback of that swapcache page.

In zswap_flush_entry() there is a comment "page is already in the swap
cache, ignore for now".  This is very interesting.  How and why does
this come about?  Does it imply that there are two copies of the same
data floating around?  If so, how come?

Preferably all the above would be understandable by reading mm/zswap.c.
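
For readers following the thread, the path being questioned condenses to
roughly the following, distilled from zswap_flush_entry() in patch 6/7
(locking, refcounting and error handling elided; the helper name
zswap_writeback_demo is hypothetical):

/* decompress the stored copy into a swapcache page, then write it back */
static int zswap_writeback_demo(struct zswap_tree *tree,
				struct zswap_entry *entry)
{
	struct writeback_control wbc = { .sync_mode = WB_SYNC_NONE, };
	swp_entry_t swpentry = swp_entry(entry->type, entry->offset);
	unsigned int dlen = PAGE_SIZE;
	struct page *page;
	u8 *src, *dst;

	/* allocate (or find) the page in the swap cache */
	if (zswap_get_swap_cache_page(swpentry, &page) <= 0)
		return -1;	/* OOM, or the page was already in the swap cache */

	/* decompress the stored object into the new swapcache page */
	src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
	dst = kmap_atomic(page);
	zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length, dst, &dlen);
	kunmap_atomic(dst);
	zs_unmap_object(tree->pool, entry->handle);
	SetPageUptodate(page);

	/* hand the page to the normal swap writeback path */
	SetPageReclaim(page);
	__swap_writepage(page, &wbc, zswap_end_swap_write);
	page_cache_release(page);
	return 0;
}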

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 7/7] zswap: add documentation
  2013-01-29 21:40 ` [PATCHv4 7/7] zswap: add documentation Seth Jennings
@ 2013-01-29 23:07   ` Andrew Morton
  0 siblings, 0 replies; 34+ messages in thread
From: Andrew Morton @ 2013-01-29 23:07 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Tue, 29 Jan 2013 15:40:27 -0600
Seth Jennings <sjenning@linux.vnet.ibm.com> wrote:

> This patch adds the documentation file for the zswap functionality

OK, that sort-of covers some of the things I asked about, although it
is rather skimpy.

It doesn't address pagefaults at all!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-01-29 22:49   ` Seth Jennings
@ 2013-01-30  4:32     ` Minchan Kim
  2013-01-30 16:01       ` Seth Jennings
  0 siblings, 1 reply; 34+ messages in thread
From: Minchan Kim @ 2013-01-30  4:32 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Joe Perches, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Tue, Jan 29, 2013 at 04:49:04PM -0600, Seth Jennings wrote:
> On 01/29/2013 04:14 PM, Joe Perches wrote:
> > On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
> >> The code required for the flushing is in a separate patch now
> >> as requested.
> > 
> > What tree does this apply to?
> > Both -next and linus fail to compile.
> 
> Link to build instruction in the cover letter:
> 
> >> NOTE: To build, read this:
> >> http://lkml.org/lkml/2013/1/28/586
> 
> The complexity is due to a conflict with a zsmalloc patch in Greg's
> staging tree that has yet to make its way upstream.
> 
> Sorry for the inconvenience.

Seth, please don't ignore previous review feedback unless you have convinced the reviewer.
I don't want to spend time arguing over trivial things.

Copied and pasted from the previous discussion of the zsmalloc patchset:

> > > On Fri, Jan 25, 2013 at 11:46:14AM -0600, Seth Jennings wrote:
> > >> These patches are the first 4 patches of the zswap patchset I
> > >> sent out previously.  Some recent commits to zsmalloc and
> > >> zcache in staging-next forced a rebase. While I was at it, Nitin
> > >> (zsmalloc maintainer) requested I break these 4 patches out from
> > >> the zswap patchset, since they stand on their own.
> > > 
> > > [2/4] and [4/4] is okay to merge current zsmalloc in staging but
> > > [1/4] and [3/4] is dependent on zswap so it should be part of
> > > zswap patchset.
> > 
> > Just to clarify, patches 1 and 3 are _not_ dependent on zswap.  They
> > just introduce changes that are only needed by zswap.
> 
> I don't think so. If zswap might be not merged, we don't need [1, 3]
> at the moment. You could argue that [1, 3] make zsmalloc more flexible
> and I agree. BUT I want it when we have needs. It would be not too late.
> So [1,3] should be part of zswap patchset.

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-01-30  4:32     ` Minchan Kim
@ 2013-01-30 16:01       ` Seth Jennings
  0 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-01-30 16:01 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Joe Perches, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 01/29/2013 10:32 PM, Minchan Kim wrote:
> On Tue, Jan 29, 2013 at 04:49:04PM -0600, Seth Jennings wrote:
>> On 01/29/2013 04:14 PM, Joe Perches wrote:
>>> On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
>>>> The code required for the flushing is in a separate patch now
>>>> as requested.
>>>
>>> What tree does this apply to?
>>> Both -next and linus fail to compile.
>>
>> Link to build instruction in the cover letter:
>>
>>>> NOTE: To build, read this:
>>>> http://lkml.org/lkml/2013/1/28/586
>>
>> The complexity is due to a conflict with a zsmalloc patch in Greg's
>> staging tree that has yet to make its way upstream.
>>
>> Sorry for the inconvenience.
> 
> Seth, Please don't ignore previous review if you didn't convince reviewer.
> I don't want to consume time with arguing trivial things.
> 
> Copy and Paste from previous discussion from zsmalloc pathset
> 
>>>> On Fri, Jan 25, 2013 at 11:46:14AM -0600, Seth Jennings wrote:
>>>>> These patches are the first 4 patches of the zswap patchset I
>>>>> sent out previously.  Some recent commits to zsmalloc and
>>>>> zcache in staging-next forced a rebase. While I was at it, Nitin
>>>>> (zsmalloc maintainer) requested I break these 4 patches out from
>>>>> the zswap patchset, since they stand on their own.
>>>>
>>>> [2/4] and [4/4] is okay to merge current zsmalloc in staging but
>>>> [1/4] and [3/4] is dependent on zswap so it should be part of
>>>> zswap patchset.
>>>
>>> Just to clarify, patches 1 and 3 are _not_ dependent on zswap.  They
>>> just introduce changes that are only needed by zswap.
>>
>> I don't think so. If zswap might be not merged, we don't need [1, 3]
>> at the moment. You could argue that [1, 3] make zsmalloc more flexible
>> and I agree. BUT I want it when we have needs. It would be not too late.
>> So [1,3] should be part of zswap patchset.

I apologize.  I am really trying to keep all the feedback straight,
and I didn't know what Greg was going to do with those zsmalloc
patches.  However, as of last night, he didn't accept the two you
mentioned as being tied to zswap-only functionality.

I'll bring them back into the patchset for v5 once I/we address
Andrew's feedback, which might take some time.

Thanks,
Seth


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 2/7] zsmalloc: promote to lib/
  2013-01-29 22:51   ` Andrew Morton
@ 2013-01-30 16:28     ` Seth Jennings
  2013-01-30 23:34       ` Andrew Morton
  2013-01-31  5:35       ` Minchan Kim
  2013-02-13 16:00     ` Seth Jennings
  1 sibling, 2 replies; 34+ messages in thread
From: Seth Jennings @ 2013-01-30 16:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 01/29/2013 04:51 PM, Andrew Morton wrote:
> On Tue, 29 Jan 2013 15:40:22 -0600
> Seth Jennings <sjenning@linux.vnet.ibm.com> wrote:
> 
>> This patch promotes the slab-based zsmalloc memory allocator
>> from the staging tree to lib/
> 
> Hate to rain on the parade, but...  we haven't reviewed zsmalloc
> yet.  At least, I haven't, and I haven't seen others do so.
> 
> So how's about we forget that zsmalloc was previously in staging and
> send the zsmalloc code out for review?  With a very good changelog
> explaining why it exists, what problems it solves, etc.
> 
> 
> I peeked.
> 
> Don't answer any of the below questions - they are examples of
> concepts which should be accessible to readers of the
> hopefully-forthcoming very-good-changelog.
> 
> - kmap_atomic() returns a void* - there's no need to cast its return value.
> 
> - Remove private MAX(), use the (much better implemented) max().
> 
> - It says "This was one of the major issues with its predecessor
>   (xvmalloc)", but drivers/staging/ramster/xvmalloc.c is still in the tree.
> 
> - USE_PGTABLE_MAPPING should be done via Kconfig.
> 
> - USE_PGTABLE_MAPPING is interesting and the changelog should go into
>   some details.  What are the pros and cons here?  Why do the two
>   options exist?  Can we eliminate one mode or the other?
> 
> - Various functions are obscure and would benefit from explanatory
>   comments.  Good comments explain "why it exists", more than "what it
>   does".
> 
>   These include get_size_class_index, get_fullness_group,
>   insert_zspage, remove_zspage, fix_fullness_group.
> 
>   Also a description of this handle encoding thing - what do these
>   "handles" refer to?  Why is stuff being encoded into them and how?
> 
> - I don't understand how the whole thing works :( If I allocate a
>   16 kbyte object with zs_malloc(), what do I get?  16k of
>   contiguous memory?  How can it do that if
>   USE_PGTABLE_MAPPING=false?  Obviously it can't so it's doing
>   something else.  But what?
> 
> - What does zs_create_pool() do and how do I use it?  It appears
>   to create a pool of all possible object sizes.  But why do we need
>   more than one such pool kernel-wide?
> 
> - I tried to work out the actual value of ZS_SIZE_CLASSES but it
>   made my head spin.
> 
> - We really really don't want to merge zsmalloc!  It would be far
>   better to use an existing allocator (perhaps after modifying it)
>   than to add yet another new one.  The really-good-changelog should
>   be compelling on this point, please.
> 
> See, I (and I assume others) are totally on first base here and we need
> to get through this before we can get onto zswap.  Sorry. 
> drivers/staging is where code goes to be ignored :(

I've noticed :-/

Thank you very much for your review!  I'll work with Nitin and Minchan
to beef up the documentation so that the answers to your questions are
more readily apparent in the code/comments.

I'll also convert the zsmalloc promotion patch back into a full-diff
patch rather than a rename patch so that people can review and comment
inline.

Question, are you saying that you'd like to see the zsmalloc promotion
in a separate patch?

My reason for including the zsmalloc promotion inside the zswap
patches was that it promoted and introduced a user all together.
However, I don't have an issue with breaking it out.

Thanks,
Seth


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 2/7] zsmalloc: promote to lib/
  2013-01-30 16:28     ` Seth Jennings
@ 2013-01-30 23:34       ` Andrew Morton
  2013-01-31  5:35       ` Minchan Kim
  1 sibling, 0 replies; 34+ messages in thread
From: Andrew Morton @ 2013-01-30 23:34 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Wed, 30 Jan 2013 10:28:41 -0600
Seth Jennings <sjenning@linux.vnet.ibm.com> wrote:

> Question, are you saying that you'd like to see the zsmalloc promotion
> in a separate patch?
> 
> My reason for including the zsmalloc promotion inside the zswap
> patches was that it promoted and introduced a user all together.
> However, I don't have an issue with breaking it out.

Keeping it all in the one series is logical.  I want to see the code
get the normal explain/review/comment process.  Minchan has been doing
sterling work here but I don't think the rest of us understand the code
as well as we can and should.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 2/7] zsmalloc: promote to lib/
  2013-01-30 16:28     ` Seth Jennings
  2013-01-30 23:34       ` Andrew Morton
@ 2013-01-31  5:35       ` Minchan Kim
  1 sibling, 0 replies; 34+ messages in thread
From: Minchan Kim @ 2013-01-31  5:35 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Wed, Jan 30, 2013 at 10:28:41AM -0600, Seth Jennings wrote:
> On 01/29/2013 04:51 PM, Andrew Morton wrote:
> > On Tue, 29 Jan 2013 15:40:22 -0600
> > Seth Jennings <sjenning@linux.vnet.ibm.com> wrote:
> > 
> >> This patch promotes the slab-based zsmalloc memory allocator
> >> from the staging tree to lib/
> > 
> > Hate to rain on the parade, but...  we haven't reviewed zsmalloc
> > yet.  At least, I haven't, and I haven't seen others do so.
> > 
> > So how's about we forget that zsmalloc was previously in staging and
> > send the zsmalloc code out for review?  With a very good changelog
> > explaining why it exists, what problems it solves, etc.
> > 
> > 
> > I peeked.
> > 
> > Don't answer any of the below questions - they are examples of
> > concepts which should be accessible to readers of the
> > hopefully-forthcoming very-good-changelog.
> > 
> > - kmap_atomic() returns a void* - there's no need to cast its return value.
> > 
> > - Remove private MAX(), use the (much better implemented) max().
> > 
> > - It says "This was one of the major issues with its predecessor
> >   (xvmalloc)", but drivers/staging/ramster/xvmalloc.c is still in the tree.
> > 
> > - USE_PGTABLE_MAPPING should be done via Kconfig.
> > 
> > - USE_PGTABLE_MAPPING is interesting and the changelog should go into
> >   some details.  What are the pros and cons here?  Why do the two
> >   options exist?  Can we eliminate one mode or the other?
> > 
> > - Various functions are obscure and would benefit from explanatory
> >   comments.  Good comments explain "why it exists", more than "what it
> >   does".
> > 
> >   These include get_size_class_index, get_fullness_group,
> >   insert_zspage, remove_zspage, fix_fullness_group.
> > 
> >   Also a description of this handle encoding thing - what do these
> >   "handles" refer to?  Why is stuff being encoded into them and how?
> > 
> > - I don't understand how the whole thing works :( If I allocate a
> >   16 kbyte object with zs_malloc(), what do I get?  16k of
> >   contiguous memory?  How can it do that if
> >   USE_PGTABLE_MAPPING=false?  Obviously it can't so it's doing
> >   something else.  But what?
> > 
> > - What does zs_create_pool() do and how do I use it?  It appears
> >   to create a pool of all possible object sizes.  But why do we need
> >   more than one such pool kernel-wide?
> > 
> > - I tried to work out the actual value of ZS_SIZE_CLASSES but it
> >   made my head spin.
> > 
> > - We really really don't want to merge zsmalloc!  It would be far
> >   better to use an existing allocator (perhaps after modifying it)
> >   than to add yet another new one.  The really-good-changelog should
> >   be compelling on this point, please.
> > 
> > See, I (and I assume others) are totally on first base here and we need
> > to get through this before we can get onto zswap.  Sorry. 
> > drivers/staging is where code goes to be ignored :(
> 
> I've noticed :-/
> 
> Thank you very much for your review!  I'll work with Nitin and Minchan
> to beef up the documentation so that the answers to your questions are
> more readily apparent in the code/comments.

Actually, a Kconfig option for USE_PGTABLE_MAPPING is my plan.
I will do it if no one objects.

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 3/7] zswap: add to mm/
  2013-01-29 21:40 ` [PATCHv4 3/7] zswap: add to mm/ Seth Jennings
@ 2013-01-31  7:07   ` Minchan Kim
  2013-01-31 19:06     ` Seth Jennings
  0 siblings, 1 reply; 34+ messages in thread
From: Minchan Kim @ 2013-01-31  7:07 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Tue, Jan 29, 2013 at 03:40:23PM -0600, Seth Jennings wrote:
> zswap is a thin compression backend for frontswap. It receives
> pages from frontswap and attempts to store them in a compressed
> memory pool, resulting in an effective partial memory reclaim and
> dramatically reduced swap device I/O.
> 
> Additionally, in most cases, pages can be retrieved from this
> compressed store much more quickly than reading from traditional
> swap devices, resulting in faster performance for many workloads.
> 
> This patch adds the zswap driver to mm/
> 
> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> ---
>  mm/Kconfig  |  15 ++
>  mm/Makefile |   1 +
>  mm/zswap.c  | 656 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 672 insertions(+)
>  create mode 100644 mm/zswap.c
> 
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 278e3ab..14b9acb 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -446,3 +446,18 @@ config FRONTSWAP
>  	  and swap data is stored as normal on the matching swap device.
>  
>  	  If unsure, say Y to enable frontswap.
> +
> +config ZSWAP
> +	bool "In-kernel swap page compression"
> +	depends on FRONTSWAP && CRYPTO
> +	select CRYPTO_LZO
> +	select ZSMALLOC

Again, I'm asking why zswap should have a dependency on CRYPTO.
Couldn't we support it as an option? I'd like to use zswap without CRYPTO,
like zram.

> +	default n
> +	help
> +	  Zswap is a backend for the frontswap mechanism in the VMM.
> +	  It receives pages from frontswap and attempts to store them
> +	  in a compressed memory pool, resulting in an effective
> +	  partial memory reclaim.  In addition, pages can be retrieved
> +	  from this compressed store much faster than from most traditional
> +	  swap devices, resulting in reduced I/O and faster performance
> +	  for many workloads.
> diff --git a/mm/Makefile b/mm/Makefile
> index 3a46287..1b1ed5c 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
>  obj-$(CONFIG_BOUNCE)	+= bounce.o
>  obj-$(CONFIG_SWAP)	+= page_io.o swap_state.o swapfile.o
>  obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
> +obj-$(CONFIG_ZSWAP)	+= zswap.o
>  obj-$(CONFIG_HAS_DMA)	+= dmapool.o
>  obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
>  obj-$(CONFIG_NUMA) 	+= mempolicy.o
> diff --git a/mm/zswap.c b/mm/zswap.c
> new file mode 100644
> index 0000000..a6c2928
> --- /dev/null
> +++ b/mm/zswap.c
> @@ -0,0 +1,656 @@
> +/*
> + * zswap-drv.c - zswap driver file
> + *
> + * zswap is a backend for frontswap that takes pages that are in the
> + * process of being swapped out and attempts to compress them and store
> + * them in a RAM-based memory pool.  This results in a significant I/O
> + * reduction on the real swap device and, in the case of a slow swap
> + * device, can also improve workload performance.
> + *
> + * Copyright (C) 2012  Seth Jennings <sjenning@linux.vnet.ibm.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * as published by the Free Software Foundation; either version 2
> + * of the License, or (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> +*/
> +
> +#include <linux/module.h>
> +#include <linux/cpu.h>
> +#include <linux/highmem.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock.h>
> +#include <linux/types.h>
> +#include <linux/atomic.h>
> +#include <linux/frontswap.h>
> +#include <linux/rbtree.h>
> +#include <linux/swap.h>
> +#include <linux/crypto.h>
> +#include <linux/mempool.h>
> +#include <linux/zsmalloc.h>
> +
> +/*********************************
> +* statistics
> +**********************************/
> +/* Number of memory pages used by the compressed pool */
> +static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
> +/* The number of compressed pages currently stored in zswap */
> +static atomic_t zswap_stored_pages = ATOMIC_INIT(0);

Just nitpicking.

It seems future zsmalloc users will want to gather information
like this, so it might be nice to add it to zsmalloc's
internals, like malloc_stats(3).

> +
> +/*
> + * The statistics below are not protected from concurrent access for
> + * performance reasons so they may not be 100% accurate.  However,
> + * they do provide useful information on roughly how many times a
> + * certain event is occurring.
> + */
> +static u64 zswap_pool_limit_hit;
> +static u64 zswap_reject_compress_poor;
> +static u64 zswap_reject_zsmalloc_fail;
> +static u64 zswap_reject_kmemcache_fail;
> +static u64 zswap_duplicate_entry;
> +
> +/*********************************
> +* tunables
> +**********************************/
> +/* Enable/disable zswap (disabled by default, fixed at boot for now) */
> +static bool zswap_enabled;
> +module_param_named(enabled, zswap_enabled, bool, 0);
> +
> +/* Compressor to be used by zswap (fixed at boot for now) */
> +#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
> +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
> +module_param_named(compressor, zswap_compressor, charp, 0);
> +
> +/* The maximum percentage of memory that the compressed pool can occupy */
> +static unsigned int zswap_max_pool_percent = 20;
> +module_param_named(max_pool_percent,
> +			zswap_max_pool_percent, uint, 0644);
> +
> +/*
> + * Maximum compression ratio, as a percentage, for an acceptable
> + * compressed page. Any pages that do not compress by at least
> + * this ratio will be rejected.
> +*/
> +static unsigned int zswap_max_compression_ratio = 80;
> +module_param_named(max_compression_ratio,
> +			zswap_max_compression_ratio, uint, 0644);
> +
> +/*********************************
> +* compression functions
> +**********************************/
> +/* per-cpu compression transforms */
> +static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms;
> +
> +enum comp_op {
> +	ZSWAP_COMPOP_COMPRESS,
> +	ZSWAP_COMPOP_DECOMPRESS
> +};
> +
> +static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
> +				u8 *dst, unsigned int *dlen)
> +{
> +	struct crypto_comp *tfm;
> +	int ret;
> +
> +	tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());
> +	switch (op) {
> +	case ZSWAP_COMPOP_COMPRESS:
> +		ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
> +		break;
> +	case ZSWAP_COMPOP_DECOMPRESS:
> +		ret = crypto_comp_decompress(tfm, src, slen, dst, dlen);
> +		break;
> +	default:
> +		ret = -EINVAL;
> +	}
> +
> +	put_cpu();
> +	return ret;
> +}
> +
> +static int __init zswap_comp_init(void)
> +{
> +	if (!crypto_has_comp(zswap_compressor, 0, 0)) {
> +		pr_info("zswap: %s compressor not available\n",
> +			zswap_compressor);
> +		/* fall back to default compressor */
> +		zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
> +		if (!crypto_has_comp(zswap_compressor, 0, 0))
> +			/* can't even load the default compressor */
> +			return -ENODEV;
> +	}
> +	pr_info("zswap: using %s compressor\n", zswap_compressor);
> +
> +	/* alloc percpu transforms */
> +	zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
> +	if (!zswap_comp_pcpu_tfms)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
> +static void zswap_comp_exit(void)
> +{
> +	/* free percpu transforms */
> +	if (zswap_comp_pcpu_tfms)
> +		free_percpu(zswap_comp_pcpu_tfms);
> +}
> +
> +/*********************************
> +* data structures
> +**********************************/
> +struct zswap_entry {
> +	struct rb_node rbnode;
> +	unsigned type;
> +	pgoff_t offset;
> +	unsigned long handle;
> +	unsigned int length;
> +};
> +
> +struct zswap_tree {
> +	struct rb_root rbroot;
> +	spinlock_t lock;
> +	struct zs_pool *pool;
> +};
> +
> +static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
> +
> +/*********************************
> +* zswap entry functions
> +**********************************/
> +#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache"
> +static struct kmem_cache *zswap_entry_cache;
> +
> +static inline int zswap_entry_cache_create(void)
> +{
> +	zswap_entry_cache =
> +		kmem_cache_create(ZSWAP_KMEM_CACHE_NAME,
> +			sizeof(struct zswap_entry), 0, 0, NULL);
> +	return (zswap_entry_cache == NULL);
> +}
> +
> +static inline void zswap_entry_cache_destory(void)
> +{
> +	kmem_cache_destroy(zswap_entry_cache);
> +}
> +
> +static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
> +{
> +	struct zswap_entry *entry;
> +	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
> +	if (!entry)
> +		return NULL;
> +	return entry;
> +}
> +
> +static inline void zswap_entry_cache_free(struct zswap_entry *entry)
> +{
> +	kmem_cache_free(zswap_entry_cache, entry);
> +}
> +
> +/*********************************
> +* rbtree functions
> +**********************************/
> +static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
> +{
> +	struct rb_node *node = root->rb_node;
> +	struct zswap_entry *entry;
> +
> +	while (node) {
> +		entry = rb_entry(node, struct zswap_entry, rbnode);
> +		if (entry->offset > offset)
> +			node = node->rb_left;
> +		else if (entry->offset < offset)
> +			node = node->rb_right;
> +		else
> +			return entry;
> +	}
> +	return NULL;
> +}
> +
> +/*
> + * In the case that an entry with the same offset is found, a pointer to
> + * the existing entry is stored in dupentry and the function returns -EEXIST.
> + */
> +static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
> +			struct zswap_entry **dupentry)
> +{
> +	struct rb_node **link = &root->rb_node, *parent = NULL;
> +	struct zswap_entry *myentry;
> +
> +	while (*link) {
> +		parent = *link;
> +		myentry = rb_entry(parent, struct zswap_entry, rbnode);
> +		if (myentry->offset > entry->offset)
> +			link = &(*link)->rb_left;
> +		else if (myentry->offset < entry->offset)
> +			link = &(*link)->rb_right;
> +		else {
> +			*dupentry = myentry;
> +			return -EEXIST;

I sent a mail to the frontswap folks about removing the existing-entry handling.
frontswap_store can be called on an existing entry if reuse_swap_page
encounters a page under writeback. Let's see how that goes.

> +		}
> +	}
> +	rb_link_node(&entry->rbnode, parent, link);
> +	rb_insert_color(&entry->rbnode, root);
> +	return 0;
> +}
> +
> +/*********************************
> +* per-cpu code
> +**********************************/
> +static DEFINE_PER_CPU(u8 *, zswap_dstmem);
> +
> +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
> +{
> +	struct crypto_comp *tfm;
> +	u8 *dst;
> +
> +	switch (action) {
> +	case CPU_UP_PREPARE:
> +		tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
> +		if (IS_ERR(tfm)) {
> +			pr_err("zswap: can't allocate compressor transform\n");
> +			return NOTIFY_BAD;
> +		}
> +		*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
> +		dst = (u8 *)__get_free_pages(GFP_KERNEL, 1);
> +		if (!dst) {
> +			pr_err("zswap: can't allocate compressor buffer\n");
> +			crypto_free_comp(tfm);
> +			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
> +			return NOTIFY_BAD;
> +		}
> +		per_cpu(zswap_dstmem, cpu) = dst;
> +		break;
> +	case CPU_DEAD:
> +	case CPU_UP_CANCELED:
> +		tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu);
> +		if (tfm) {
> +			crypto_free_comp(tfm);
> +			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
> +		}
> +		dst = per_cpu(zswap_dstmem, cpu);
> +		if (dst) {
> +			free_pages((unsigned long)dst, 1);
> +			per_cpu(zswap_dstmem, cpu) = NULL;
> +		}
> +		break;
> +	default:
> +		break;
> +	}
> +	return NOTIFY_OK;
> +}
> +
> +static int zswap_cpu_notifier(struct notifier_block *nb,
> +				unsigned long action, void *pcpu)
> +{
> +	unsigned long cpu = (unsigned long)pcpu;
> +	return __zswap_cpu_notifier(action, cpu);
> +}
> +
> +static struct notifier_block zswap_cpu_notifier_block = {
> +	.notifier_call = zswap_cpu_notifier
> +};
> +
> +static int zswap_cpu_init(void)
> +{
> +	unsigned long cpu;
> +
> +	get_online_cpus();
> +	for_each_online_cpu(cpu)
> +		if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK)
> +			goto cleanup;
> +	register_cpu_notifier(&zswap_cpu_notifier_block);
> +	put_online_cpus();
> +	return 0;
> +
> +cleanup:
> +	for_each_online_cpu(cpu)
> +		__zswap_cpu_notifier(CPU_UP_CANCELED, cpu);
> +	put_online_cpus();
> +	return -ENOMEM;
> +}
> +
> +/*********************************
> +* zsmalloc callbacks
> +**********************************/
> +static mempool_t *zswap_page_pool;
> +
> +static inline unsigned int zswap_max_pool_pages(void)
> +{
> +	return zswap_max_pool_percent * totalram_pages / 100;
> +}
> +
> +static inline int zswap_page_pool_create(void)
> +{
> +	/* TODO: dynamically size mempool */
> +	zswap_page_pool = mempool_create_page_pool(256, 0);
> +	if (!zswap_page_pool)
> +		return -ENOMEM;
> +	return 0;
> +}
> +
> +static inline void zswap_page_pool_destroy(void)
> +{
> +	mempool_destroy(zswap_page_pool);
> +}
> +
> +static struct page *zswap_alloc_page(gfp_t flags)
> +{
> +	struct page *page;
> +
> +	if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) {
> +		zswap_pool_limit_hit++;
> +		return NULL;
> +	}
> +	page = mempool_alloc(zswap_page_pool, flags);
> +	if (page)
> +		atomic_inc(&zswap_pool_pages);
> +	return page;
> +}
> +
> +static void zswap_free_page(struct page *page)
> +{
> +	if (!page)
> +		return;
> +	mempool_free(page, zswap_page_pool);
> +	atomic_dec(&zswap_pool_pages);
> +}
> +
> +static struct zs_ops zswap_zs_ops = {
> +	.alloc = zswap_alloc_page,
> +	.free = zswap_free_page
> +};
> +
> +/*********************************
> +* frontswap hooks
> +**********************************/
> +/* attempts to compress and store a single page */
> +static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *page)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry, *dupentry;
> +	int ret;
> +	unsigned int dlen = PAGE_SIZE;
> +	unsigned long handle;
> +	char *buf;
> +	u8 *src, *dst;
> +
> +	if (!tree) {

We should fix frontswap_init so it can return an error code instead of
validating the init on every call.  I will fix it after Konrad merges
the earlier frontswap patch that moves the frontswap_init call out of
swap_lock.
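
Something like this once the hook can fail (a rough sketch against this
patch; it assumes frontswap_ops.init is changed to return int and is
called outside swap_lock, so GFP_KERNEL becomes legal):

	static int zswap_frontswap_init(unsigned type)
	{
		struct zswap_tree *tree;

		tree = kzalloc(sizeof(*tree), GFP_KERNEL);
		if (!tree)
			return -ENOMEM;
		tree->pool = zs_create_pool(GFP_KERNEL, &zswap_zs_ops);
		if (!tree->pool) {
			kfree(tree);
			return -ENOMEM;
		}
		tree->rbroot = RB_ROOT;
		spin_lock_init(&tree->lock);
		zswap_trees[type] = tree;
		return 0;
	}

Then the !tree check at the top of zswap_frontswap_store() could go
away entirely.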

> +		ret = -ENODEV;
> +		goto reject;
> +	}
> +
> +	/* compress */
> +	dst = get_cpu_var(zswap_dstmem);
> +	src = kmap_atomic(page);
> +	ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
> +	kunmap_atomic(src);
> +	if (ret) {
> +		ret = -EINVAL;
> +		goto putcpu;
> +	}
> +	if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
> +		zswap_reject_compress_poor++;
> +		ret = -E2BIG;
> +		goto putcpu;
> +	}
> +
> +	/* store */
> +	handle = zs_malloc(tree->pool, dlen,
> +		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> +			__GFP_NOWARN);
> +	if (!handle) {
> +		zswap_reject_zsmalloc_fail++;
> +		ret = -ENOMEM;
> +		goto putcpu;
> +	}
> +
> +	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
> +	memcpy(buf, dst, dlen);
> +	zs_unmap_object(tree->pool, handle);
> +	put_cpu_var(zswap_dstmem);
> +
> +	/* allocate entry */
> +	entry = zswap_entry_cache_alloc(GFP_KERNEL);
> +	if (!entry) {
> +		zs_free(tree->pool, handle);
> +		zswap_reject_kmemcache_fail++;
> +		ret = -ENOMEM;
> +		goto reject;
> +	}
> +
> +	/* populate entry */
> +	entry->type = type;
> +	entry->offset = offset;
> +	entry->handle = handle;
> +	entry->length = dlen;
> +
> +	/* map */
> +	spin_lock(&tree->lock);
> +	do {
> +		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
> +		if (ret == -EEXIST) {
> +			zswap_duplicate_entry++;
> +
> +			/* remove from rbtree */
> +			rb_erase(&dupentry->rbnode, &tree->rbroot);
> +			

Whitespace damage here.
Will continue to review tomorrow.

> +			/* free */
> +			zs_free(tree->pool, dupentry->handle);
> +			zswap_entry_cache_free(dupentry);
> +			atomic_dec(&zswap_stored_pages);
> +		}
> +	} while (ret == -EEXIST);
> +	spin_unlock(&tree->lock);
> +
> +	/* update stats */
> +	atomic_inc(&zswap_stored_pages);
> +
> +	return 0;
> +
> +putcpu:
> +	put_cpu_var(zswap_dstmem);
> +reject:
> +	return ret;
> +}
> +
> +/*
> + * returns 0 if the page was successfully decompressed
> + * returns -1 on entry not found or error
> +*/
> +static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry;
> +	u8 *src, *dst;
> +	unsigned int dlen;
> +
> +	/* find */
> +	spin_lock(&tree->lock);
> +	entry = zswap_rb_search(&tree->rbroot, offset);
> +	spin_unlock(&tree->lock);
> +
> +	/* decompress */
> +	dlen = PAGE_SIZE;
> +	src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
> +	dst = kmap_atomic(page);
> +	zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
> +		dst, &dlen);
> +	kunmap_atomic(dst);
> +	zs_unmap_object(tree->pool, entry->handle);
> +
> +	return 0;
> +}
> +
> +/* invalidates a single page */
> +static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry;
> +
> +	/* find */
> +	spin_lock(&tree->lock);
> +	entry = zswap_rb_search(&tree->rbroot, offset);
> +
> +	/* remove from rbtree */
> +	rb_erase(&entry->rbnode, &tree->rbroot);
> +	spin_unlock(&tree->lock);
> +
> +	/* free */
> +	zs_free(tree->pool, entry->handle);
> +	zswap_entry_cache_free(entry);
> +	atomic_dec(&zswap_stored_pages);
> +}
> +
> +/* invalidates all pages for the given swap type */
> +static void zswap_frontswap_invalidate_area(unsigned type)
> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct rb_node *node, *next;
> +	struct zswap_entry *entry;
> +
> +	if (!tree)
> +		return;
> +
> +	/* walk the tree and free everything */
> +	spin_lock(&tree->lock);
> +	node = rb_first(&tree->rbroot);
> +	while (node) {
> +		entry = rb_entry(node, struct zswap_entry, rbnode);
> +		zs_free(tree->pool, entry->handle);
> +		next = rb_next(node);
> +		zswap_entry_cache_free(entry);
> +		node = next;
> +	}
> +	tree->rbroot = RB_ROOT;
> +	spin_unlock(&tree->lock);
> +}
> +
> +/* NOTE: this is called in atomic context from swapon and must not sleep */
> +static void zswap_frontswap_init(unsigned type)
> +{
> +	struct zswap_tree *tree;
> +
> +	tree = kzalloc(sizeof(struct zswap_tree), GFP_NOWAIT);
> +	if (!tree)
> +		goto err;
> +	tree->pool = zs_create_pool(GFP_NOWAIT, &zswap_zs_ops);
> +	if (!tree->pool)
> +		goto freetree;
> +	tree->rbroot = RB_ROOT;
> +	spin_lock_init(&tree->lock);
> +	zswap_trees[type] = tree;
> +	return;
> +
> +freetree:
> +	kfree(tree);
> +err:
> +	pr_err("zswap: alloc failed, zswap disabled for swap type %d\n", type);
> +}
> +
> +static struct frontswap_ops zswap_frontswap_ops = {
> +	.store = zswap_frontswap_store,
> +	.load = zswap_frontswap_load,
> +	.invalidate_page = zswap_frontswap_invalidate_page,
> +	.invalidate_area = zswap_frontswap_invalidate_area,
> +	.init = zswap_frontswap_init
> +};
> +
> +/*********************************
> +* debugfs functions
> +**********************************/
> +#ifdef CONFIG_DEBUG_FS
> +#include <linux/debugfs.h>
> +
> +static struct dentry *zswap_debugfs_root;
> +
> +static int __init zswap_debugfs_init(void)
> +{
> +	if (!debugfs_initialized())
> +		return -ENODEV;
> +
> +	zswap_debugfs_root = debugfs_create_dir("zswap", NULL);
> +	if (!zswap_debugfs_root)
> +		return -ENOMEM;
> +
> +	debugfs_create_u64("pool_limit_hit", S_IRUGO,
> +			zswap_debugfs_root, &zswap_pool_limit_hit);
> +	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
> +	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
> +	debugfs_create_u64("reject_compress_poor", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_compress_poor);
> +	debugfs_create_u64("duplicate_entry", S_IRUGO,
> +			zswap_debugfs_root, &zswap_duplicate_entry);
> +	debugfs_create_atomic_t("pool_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_pool_pages);
> +	debugfs_create_atomic_t("stored_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_stored_pages);
> +
> +	return 0;
> +}
> +
> +static void __exit zswap_debugfs_exit(void)
> +{
> +	if (zswap_debugfs_root)
> +		debugfs_remove_recursive(zswap_debugfs_root);
> +}
> +#else
> +static inline int __init zswap_debugfs_init(void)
> +{
> +	return 0;
> +}
> +
> +static inline void __exit zswap_debugfs_exit(void) { }
> +#endif
> +
> +/*********************************
> +* module init and exit
> +**********************************/
> +static int __init init_zswap(void)
> +{
> +	if (!zswap_enabled)
> +		return 0;
> +
> +	pr_info("loading zswap\n");
> +	if (zswap_entry_cache_create()) {
> +		pr_err("zswap: entry cache creation failed\n");
> +		goto error;
> +	}
> +	if (zswap_page_pool_create()) {
> +		pr_err("zswap: page pool initialization failed\n");
> +		goto pagepoolfail;
> +	}
> +	if (zswap_comp_init()) {
> +		pr_err("zswap: compressor initialization failed\n");
> +		goto compfail;
> +	}
> +	if (zswap_cpu_init()) {
> +		pr_err("zswap: per-cpu initialization failed\n");
> +		goto pcpufail;
> +	}
> +	frontswap_register_ops(&zswap_frontswap_ops);
> +	if (zswap_debugfs_init())
> +		pr_warn("zswap: debugfs initialization failed\n");
> +	return 0;
> +pcpufail:
> +	zswap_comp_exit();
> +compfail:
> +	zswap_page_pool_destroy();
> +pagepoolfail:
> +	zswap_entry_cache_destory();
> +error:
> +	return -ENOMEM;
> +}
> +/* must be late so crypto has time to come up */
> +late_initcall(init_zswap);
> +
> +MODULE_LICENSE("GPL");
> +MODULE_AUTHOR("Seth Jennings <sjenning@linux.vnet.ibm.com>");
> +MODULE_DESCRIPTION("Compression backend for frontswap pages");
> -- 
> 1.8.1.1
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 3/7] zswap: add to mm/
  2013-01-31  7:07   ` Minchan Kim
@ 2013-01-31 19:06     ` Seth Jennings
  2013-01-31 20:07       ` Robert Jennings
  2013-02-01  2:38       ` Minchan Kim
  0 siblings, 2 replies; 34+ messages in thread
From: Seth Jennings @ 2013-01-31 19:06 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 01/31/2013 01:07 AM, Minchan Kim wrote:
> On Tue, Jan 29, 2013 at 03:40:23PM -0600, Seth Jennings wrote:
>> zswap is a thin compression backend for frontswap. It receives
>> pages from frontswap and attempts to store them in a compressed
>> memory pool, resulting in an effective partial memory reclaim and
>> dramatically reduced swap device I/O.
>>
>> Additionally, in most cases, pages can be retrieved from this
>> compressed store much more quickly than reading from traditional
>> swap devices resulting in faster performance for many workloads.
>>
>> This patch adds the zswap driver to mm/
>>
>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>> ---
>>  mm/Kconfig  |  15 ++
>>  mm/Makefile |   1 +
>>  mm/zswap.c  | 656 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 672 insertions(+)
>>  create mode 100644 mm/zswap.c
>>
>> diff --git a/mm/Kconfig b/mm/Kconfig
>> index 278e3ab..14b9acb 100644
>> --- a/mm/Kconfig
>> +++ b/mm/Kconfig
>> @@ -446,3 +446,18 @@ config FRONTSWAP
>>  	  and swap data is stored as normal on the matching swap device.
>>  
>>  	  If unsure, say Y to enable frontswap.
>> +
>> +config ZSWAP
>> +	bool "In-kernel swap page compression"
>> +	depends on FRONTSWAP && CRYPTO
>> +	select CRYPTO_LZO
>> +	select ZSMALLOC
> 
> Again, I'm asking why zswap should have a dependency on CRYPTO?
> Couldn't we support it as an option? I'd like to use zswap without CRYPTO
> like zram.

The reason we need CRYPTO is that zswap uses it to support a pluggable
compression model.  zswap can use any compressor that has a crypto API
driver.  zswap has _symbol dependencies_ on CRYPTO.  If it isn't
selected, the build breaks.

> 
>> +	default n
>> +	help
>> +	  Zswap is a backend for the frontswap mechanism in the VMM.
>> +	  It receives pages from frontswap and attempts to store them
>> +	  in a compressed memory pool, resulting in an effective
>> +	  partial memory reclaim.  In addition, pages can be retrieved
>> +	  from this compressed store much faster than most traditional
>> +	  swap devices resulting in reduced I/O and faster performance
>> +	  for many workloads.
>> diff --git a/mm/Makefile b/mm/Makefile
>> index 3a46287..1b1ed5c 100644
>> --- a/mm/Makefile
>> +++ b/mm/Makefile
>> @@ -32,6 +32,7 @@ obj-$(CONFIG_HAVE_MEMBLOCK) += memblock.o
>>  obj-$(CONFIG_BOUNCE)	+= bounce.o
>>  obj-$(CONFIG_SWAP)	+= page_io.o swap_state.o swapfile.o
>>  obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
>> +obj-$(CONFIG_ZSWAP)	+= zswap.o
>>  obj-$(CONFIG_HAS_DMA)	+= dmapool.o
>>  obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
>>  obj-$(CONFIG_NUMA) 	+= mempolicy.o
>> diff --git a/mm/zswap.c b/mm/zswap.c
>> new file mode 100644
>> index 0000000..a6c2928
>> --- /dev/null
>> +++ b/mm/zswap.c
>> @@ -0,0 +1,656 @@
>> +/*
>> + * zswap.c - zswap driver file
>> + *
>> + * zswap is a backend for frontswap that takes pages that are in the
>> + * process of being swapped out and attempts to compress them and store
>> + * them in a RAM-based memory pool.  This results in a significant I/O
>> + * reduction on the real swap device and, in the case of a slow swap
>> + * device, can also improve workload performance.
>> + *
>> + * Copyright (C) 2012  Seth Jennings <sjenning@linux.vnet.ibm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or
>> + * modify it under the terms of the GNU General Public License
>> + * as published by the Free Software Foundation; either version 2
>> + * of the License, or (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> +*/
>> +
>> +#include <linux/module.h>
>> +#include <linux/cpu.h>
>> +#include <linux/highmem.h>
>> +#include <linux/slab.h>
>> +#include <linux/spinlock.h>
>> +#include <linux/types.h>
>> +#include <linux/atomic.h>
>> +#include <linux/frontswap.h>
>> +#include <linux/rbtree.h>
>> +#include <linux/swap.h>
>> +#include <linux/crypto.h>
>> +#include <linux/mempool.h>
>> +#include <linux/zsmalloc.h>
>> +
>> +/*********************************
>> +* statistics
>> +**********************************/
>> +/* Number of memory pages used by the compressed pool */
>> +static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
>> +/* The number of compressed pages currently stored in zswap */
>> +static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> 
> Just nitpicking.
> 
> It seems future zsmalloc users want to gather information
> like this so it might be nice to add them into zsmalloc's
> internal like malloc_stats(2)
> 
>> +
>> +/*
>> + * The statistics below are not protected from concurrent access for
>> + * performance reasons so they may not be 100% accurate.  However,
>> + * they do provide useful information on roughly how many times a
>> + * certain event is occurring.
>> +*/
>> +static u64 zswap_pool_limit_hit;
>> +static u64 zswap_reject_compress_poor;
>> +static u64 zswap_reject_zsmalloc_fail;
>> +static u64 zswap_reject_kmemcache_fail;
>> +static u64 zswap_duplicate_entry;
>> +
>> +/*********************************
>> +* tunables
>> +**********************************/
>> +/* Enable/disable zswap (disabled by default, fixed at boot for now) */
>> +static bool zswap_enabled;
>> +module_param_named(enabled, zswap_enabled, bool, 0);
>> +
>> +/* Compressor to be used by zswap (fixed at boot for now) */
>> +#define ZSWAP_COMPRESSOR_DEFAULT "lzo"
>> +static char *zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
>> +module_param_named(compressor, zswap_compressor, charp, 0);
>> +
>> +/* The maximum percentage of memory that the compressed pool can occupy */
>> +static unsigned int zswap_max_pool_percent = 20;
>> +module_param_named(max_pool_percent,
>> +			zswap_max_pool_percent, uint, 0644);
>> +
>> +/*
>> + * Maximum compression ratio, as a percentage, for an acceptable
>> + * compressed page. Any pages that do not compress by at least
>> + * this ratio will be rejected.
>> +*/
>> +static unsigned int zswap_max_compression_ratio = 80;
>> +module_param_named(max_compression_ratio,
>> +			zswap_max_compression_ratio, uint, 0644);
>> +
>> +/*********************************
>> +* compression functions
>> +**********************************/
>> +/* per-cpu compression transforms */
>> +static struct crypto_comp * __percpu *zswap_comp_pcpu_tfms;
>> +
>> +enum comp_op {
>> +	ZSWAP_COMPOP_COMPRESS,
>> +	ZSWAP_COMPOP_DECOMPRESS
>> +};
>> +
>> +static int zswap_comp_op(enum comp_op op, const u8 *src, unsigned int slen,
>> +				u8 *dst, unsigned int *dlen)
>> +{
>> +	struct crypto_comp *tfm;
>> +	int ret;
>> +
>> +	tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());
>> +	switch (op) {
>> +	case ZSWAP_COMPOP_COMPRESS:
>> +		ret = crypto_comp_compress(tfm, src, slen, dst, dlen);

See here.  crypto_comp_compress() is a symbol dependency on CRYPTO.

>> +		break;
>> +	case ZSWAP_COMPOP_DECOMPRESS:
>> +		ret = crypto_comp_decompress(tfm, src, slen, dst, dlen);

And here.

>> +		break;
>> +	default:
>> +		ret = -EINVAL;
>> +	}
>> +
>> +	put_cpu();
>> +	return ret;
>> +}
>> +
>> +static int __init zswap_comp_init(void)
>> +{
>> +	if (!crypto_has_comp(zswap_compressor, 0, 0)) {

And here.

>> +		pr_info("zswap: %s compressor not available\n",
>> +			zswap_compressor);
>> +		/* fall back to default compressor */
>> +		zswap_compressor = ZSWAP_COMPRESSOR_DEFAULT;
>> +		if (!crypto_has_comp(zswap_compressor, 0, 0))
>> +			/* can't even load the default compressor */
>> +			return -ENODEV;
>> +	}
>> +	pr_info("zswap: using %s compressor\n", zswap_compressor);
>> +
>> +	/* alloc percpu transforms */
>> +	zswap_comp_pcpu_tfms = alloc_percpu(struct crypto_comp *);
>> +	if (!zswap_comp_pcpu_tfms)
>> +		return -ENOMEM;
>> +	return 0;
>> +}
>> +
>> +static void zswap_comp_exit(void)
>> +{
>> +	/* free percpu transforms */
>> +	if (zswap_comp_pcpu_tfms)
>> +		free_percpu(zswap_comp_pcpu_tfms);
>> +}
>> +
>> +/*********************************
>> +* data structures
>> +**********************************/
>> +struct zswap_entry {
>> +	struct rb_node rbnode;
>> +	unsigned type;
>> +	pgoff_t offset;
>> +	unsigned long handle;
>> +	unsigned int length;
>> +};
>> +
>> +struct zswap_tree {
>> +	struct rb_root rbroot;
>> +	spinlock_t lock;
>> +	struct zs_pool *pool;
>> +};
>> +
>> +static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
>> +
>> +/*********************************
>> +* zswap entry functions
>> +**********************************/
>> +#define ZSWAP_KMEM_CACHE_NAME "zswap_entry_cache"
>> +static struct kmem_cache *zswap_entry_cache;
>> +
>> +static inline int zswap_entry_cache_create(void)
>> +{
>> +	zswap_entry_cache =
>> +		kmem_cache_create(ZSWAP_KMEM_CACHE_NAME,
>> +			sizeof(struct zswap_entry), 0, 0, NULL);
>> +	return (zswap_entry_cache == NULL);
>> +}
>> +
>> +static inline void zswap_entry_cache_destory(void)
>> +{
>> +	kmem_cache_destroy(zswap_entry_cache);
>> +}
>> +
>> +static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
>> +{
>> +	struct zswap_entry *entry;
>> +	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
>> +	if (!entry)
>> +		return NULL;
>> +	return entry;
>> +}
>> +
>> +static inline void zswap_entry_cache_free(struct zswap_entry *entry)
>> +{
>> +	kmem_cache_free(zswap_entry_cache, entry);
>> +}
>> +
>> +/*********************************
>> +* rbtree functions
>> +**********************************/
>> +static struct zswap_entry *zswap_rb_search(struct rb_root *root, pgoff_t offset)
>> +{
>> +	struct rb_node *node = root->rb_node;
>> +	struct zswap_entry *entry;
>> +
>> +	while (node) {
>> +		entry = rb_entry(node, struct zswap_entry, rbnode);
>> +		if (entry->offset > offset)
>> +			node = node->rb_left;
>> +		else if (entry->offset < offset)
>> +			node = node->rb_right;
>> +		else
>> +			return entry;
>> +	}
>> +	return NULL;
>> +}
>> +
>> +/*
>> + * In the case that an entry with the same offset is found, a pointer to
>> + * the existing entry is stored in dupentry and the function returns -EEXIST
>> +*/
>> +static int zswap_rb_insert(struct rb_root *root, struct zswap_entry *entry,
>> +			struct zswap_entry **dupentry)
>> +{
>> +	struct rb_node **link = &root->rb_node, *parent = NULL;
>> +	struct zswap_entry *myentry;
>> +
>> +	while (*link) {
>> +		parent = *link;
>> +		myentry = rb_entry(parent, struct zswap_entry, rbnode);
>> +		if (myentry->offset > entry->offset)
>> +			link = &(*link)->rb_left;
>> +		else if (myentry->offset < entry->offset)
>> +			link = &(*link)->rb_right;
>> +		else {
>> +			*dupentry = myentry;
>> +			return -EEXIST;
> 
> I sent a mail to the frontswap guys about removing the existing-entry
> handling.  frontswap_store can be called on an existing entry if
> reuse_swap_page encounters a page that is under writeback.  Let's see
> how it goes.

Thanks, it'd be nice to do away with this nastiness.

> 
>> +		}
>> +	}
>> +	rb_link_node(&entry->rbnode, parent, link);
>> +	rb_insert_color(&entry->rbnode, root);
>> +	return 0;
>> +}
>> +
>> +/*********************************
>> +* per-cpu code
>> +**********************************/
>> +static DEFINE_PER_CPU(u8 *, zswap_dstmem);
>> +
>> +static int __zswap_cpu_notifier(unsigned long action, unsigned long cpu)
>> +{
>> +	struct crypto_comp *tfm;
>> +	u8 *dst;
>> +
>> +	switch (action) {
>> +	case CPU_UP_PREPARE:
>> +		tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
>> +		if (IS_ERR(tfm)) {
>> +			pr_err("zswap: can't allocate compressor transform\n");
>> +			return NOTIFY_BAD;
>> +		}
>> +		*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = tfm;
>> +		dst = (u8 *)__get_free_pages(GFP_KERNEL, 1);
>> +		if (!dst) {
>> +			pr_err("zswap: can't allocate compressor buffer\n");
>> +			crypto_free_comp(tfm);
>> +			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
>> +			return NOTIFY_BAD;
>> +		}
>> +		per_cpu(zswap_dstmem, cpu) = dst;
>> +		break;
>> +	case CPU_DEAD:
>> +	case CPU_UP_CANCELED:
>> +		tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, cpu);
>> +		if (tfm) {
>> +			crypto_free_comp(tfm);
>> +			*per_cpu_ptr(zswap_comp_pcpu_tfms, cpu) = NULL;
>> +		}
>> +		dst = per_cpu(zswap_dstmem, cpu);
>> +		if (dst) {
>> +			free_pages((unsigned long)dst, 1);
>> +			per_cpu(zswap_dstmem, cpu) = NULL;
>> +		}
>> +		break;
>> +	default:
>> +		break;
>> +	}
>> +	return NOTIFY_OK;
>> +}
>> +
>> +static int zswap_cpu_notifier(struct notifier_block *nb,
>> +				unsigned long action, void *pcpu)
>> +{
>> +	unsigned long cpu = (unsigned long)pcpu;
>> +	return __zswap_cpu_notifier(action, cpu);
>> +}
>> +
>> +static struct notifier_block zswap_cpu_notifier_block = {
>> +	.notifier_call = zswap_cpu_notifier
>> +};
>> +
>> +static int zswap_cpu_init(void)
>> +{
>> +	unsigned long cpu;
>> +
>> +	get_online_cpus();
>> +	for_each_online_cpu(cpu)
>> +		if (__zswap_cpu_notifier(CPU_UP_PREPARE, cpu) != NOTIFY_OK)
>> +			goto cleanup;
>> +	register_cpu_notifier(&zswap_cpu_notifier_block);
>> +	put_online_cpus();
>> +	return 0;
>> +
>> +cleanup:
>> +	for_each_online_cpu(cpu)
>> +		__zswap_cpu_notifier(CPU_UP_CANCELED, cpu);
>> +	put_online_cpus();
>> +	return -ENOMEM;
>> +}
>> +
>> +/*********************************
>> +* zsmalloc callbacks
>> +**********************************/
>> +static mempool_t *zswap_page_pool;
>> +
>> +static inline unsigned int zswap_max_pool_pages(void)
>> +{
>> +	return zswap_max_pool_percent * totalram_pages / 100;
>> +}
>> +
>> +static inline int zswap_page_pool_create(void)
>> +{
>> +	/* TODO: dynamically size mempool */
>> +	zswap_page_pool = mempool_create_page_pool(256, 0);
>> +	if (!zswap_page_pool)
>> +		return -ENOMEM;
>> +	return 0;
>> +}
>> +
>> +static inline void zswap_page_pool_destroy(void)
>> +{
>> +	mempool_destroy(zswap_page_pool);
>> +}
>> +
>> +static struct page *zswap_alloc_page(gfp_t flags)
>> +{
>> +	struct page *page;
>> +
>> +	if (atomic_read(&zswap_pool_pages) >= zswap_max_pool_pages()) {
>> +		zswap_pool_limit_hit++;
>> +		return NULL;
>> +	}
>> +	page = mempool_alloc(zswap_page_pool, flags);
>> +	if (page)
>> +		atomic_inc(&zswap_pool_pages);
>> +	return page;
>> +}
>> +
>> +static void zswap_free_page(struct page *page)
>> +{
>> +	if (!page)
>> +		return;
>> +	mempool_free(page, zswap_page_pool);
>> +	atomic_dec(&zswap_pool_pages);
>> +}
>> +
>> +static struct zs_ops zswap_zs_ops = {
>> +	.alloc = zswap_alloc_page,
>> +	.free = zswap_free_page
>> +};
>> +
>> +/*********************************
>> +* frontswap hooks
>> +**********************************/
>> +/* attempts to compress and store a single page */
>> +static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *page)
>> +{
>> +	struct zswap_tree *tree = zswap_trees[type];
>> +	struct zswap_entry *entry, *dupentry;
>> +	int ret;
>> +	unsigned int dlen = PAGE_SIZE;
>> +	unsigned long handle;
>> +	char *buf;
>> +	u8 *src, *dst;
>> +
>> +	if (!tree) {
> 
> We should fix frontswap_init so it can return an error code instead of
> validating the init on every call.  I will fix it after Konrad merges
> the earlier frontswap patch that moves the frontswap_init call out of
> swap_lock.

That would also be better.

> 
>> +		ret = -ENODEV;
>> +		goto reject;
>> +	}
>> +
>> +	/* compress */
>> +	dst = get_cpu_var(zswap_dstmem);
>> +	src = kmap_atomic(page);
>> +	ret = zswap_comp_op(ZSWAP_COMPOP_COMPRESS, src, PAGE_SIZE, dst, &dlen);
>> +	kunmap_atomic(src);
>> +	if (ret) {
>> +		ret = -EINVAL;
>> +		goto putcpu;
>> +	}
>> +	if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
>> +		zswap_reject_compress_poor++;
>> +		ret = -E2BIG;
>> +		goto putcpu;
>> +	}
>> +
>> +	/* store */
>> +	handle = zs_malloc(tree->pool, dlen,
>> +		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
>> +			__GFP_NOWARN);
>> +	if (!handle) {
>> +		zswap_reject_zsmalloc_fail++;
>> +		ret = -ENOMEM;
>> +		goto putcpu;
>> +	}
>> +
>> +	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
>> +	memcpy(buf, dst, dlen);
>> +	zs_unmap_object(tree->pool, handle);
>> +	put_cpu_var(zswap_dstmem);
>> +
>> +	/* allocate entry */
>> +	entry = zswap_entry_cache_alloc(GFP_KERNEL);
>> +	if (!entry) {
>> +		zs_free(tree->pool, handle);
>> +		zswap_reject_kmemcache_fail++;
>> +		ret = -ENOMEM;
>> +		goto reject;
>> +	}
>> +
>> +	/* populate entry */
>> +	entry->type = type;
>> +	entry->offset = offset;
>> +	entry->handle = handle;
>> +	entry->length = dlen;
>> +
>> +	/* map */
>> +	spin_lock(&tree->lock);
>> +	do {
>> +		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
>> +		if (ret == -EEXIST) {
>> +			zswap_duplicate_entry++;
>> +
>> +			/* remove from rbtree */
>> +			rb_erase(&dupentry->rbnode, &tree->rbroot);
>> +			
> 
> Whitespace damage here.

Doh. Will fix.

> Will continue to review tomorrow.

Thanks!

Seth


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 3/7] zswap: add to mm/
  2013-01-31 19:06     ` Seth Jennings
@ 2013-01-31 20:07       ` Robert Jennings
  2013-02-01  2:38       ` Minchan Kim
  1 sibling, 0 replies; 34+ messages in thread
From: Robert Jennings @ 2013-01-31 20:07 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Minchan Kim, Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Jenifer Hopper,
	Mel Gorman, Johannes Weiner, Rik van Riel, Larry Woodman,
	Benjamin Herrenschmidt, Dave Hansen, linux-mm, linux-kernel,
	devel

* Seth Jennings (sjenning@linux.vnet.ibm.com) wrote:
> On 01/31/2013 01:07 AM, Minchan Kim wrote:
> > On Tue, Jan 29, 2013 at 03:40:23PM -0600, Seth Jennings wrote:
> >> zswap is a thin compression backend for frontswap. It receives
> >> pages from frontswap and attempts to store them in a compressed
> >> memory pool, resulting in an effective partial memory reclaim and
> >> dramatically reduced swap device I/O.
> >>
> >> Additionally, in most cases, pages can be retrieved from this
> >> compressed store much more quickly than reading from traditional
> >> swap devices resulting in faster performance for many workloads.
> >>
> >> This patch adds the zswap driver to mm/
> >>
> >> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> >> ---
> >>  mm/Kconfig  |  15 ++
> >>  mm/Makefile |   1 +
> >>  mm/zswap.c  | 656 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>  3 files changed, 672 insertions(+)
> >>  create mode 100644 mm/zswap.c
> >>
> >> diff --git a/mm/Kconfig b/mm/Kconfig
> >> index 278e3ab..14b9acb 100644
> >> --- a/mm/Kconfig
> >> +++ b/mm/Kconfig
> >> @@ -446,3 +446,18 @@ config FRONTSWAP
> >>  	  and swap data is stored as normal on the matching swap device.
> >>  
> >>  	  If unsure, say Y to enable frontswap.
> >> +
> >> +config ZSWAP
> >> +	bool "In-kernel swap page compression"
> >> +	depends on FRONTSWAP && CRYPTO
> >> +	select CRYPTO_LZO
> >> +	select ZSMALLOC
> > 
> > Again, I'm asking why zswap should have a dependency on CRYPTO?
> > Couldn't we support it as an option? I'd like to use zswap without CRYPTO
> > like zram.
> 
> The reason we need CRYPTO is that zswap uses it to support a pluggable
> compression model.  zswap can use any compressor that has a crypto API
> driver.  zswap has _symbol dependencies_ on CRYPTO.  If it isn't
> selected, the build breaks.

And we went with a pluggable model so that we could support hardware
accelerated compression engines like:

0e16aaf powerpc/crypto: add 842 hardware compression driver
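
(With this series that should just be a matter of booting with, e.g.,
zswap.enabled=1 zswap.compressor=842 -- assuming the driver is built
in and registers an "842" crypto comp transform.)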

--Rob Jennings


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
                   ` (7 preceding siblings ...)
  2013-01-29 22:14 ` [PATCHv4 0/7] zswap: compressed swap caching Joe Perches
@ 2013-02-01  1:39 ` Simon Jeons
  2013-02-01 15:13   ` Seth Jennings
       [not found] ` <5110287A.5050200@linux.vnet.ibm.com>
  9 siblings, 1 reply; 34+ messages in thread
From: Simon Jeons @ 2013-02-01  1:39 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

Hi Seth,
On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
> Sorry for the churn but just this set might be easier to review.
> The code required for the flushing is in a separate patch now
> as requested.
> 
> Changelog:
> 
> v4:
> * Added Acks (Minchan)
> * Separated flushing functionality into standalone patch
>   for easier review (Minchan)
> * fix comment on zswap enabled attribute (Minchan)
> * add TODO for dynamic mempool size (Minchan)
> * and check for NULL in zswap_free_page() (Minchan)
> * add missing zs_free() in error path (Minchan)
> * TODO: add comments for flushing/refcounting (Minchan)
> 
> NOTE: To build, read this:
> http://lkml.org/lkml/2013/1/28/586
> 
> v3:
> * Dropped the zsmalloc patches from the set, except the promotion patch
>   which has been converted to a rename patch (vs full diff).  The dropped
>   patches have been Acked and are going into Greg's staging tree soon.
> * Separated [PATCHv2 7/9] into two patches since it makes changes for two
>   different reasons (Minchan)
> * Moved ZSWAP_MAX_OUTSTANDING_FLUSHES near the top in zswap.c (Rik)
> * Rebase to v3.8-rc5. linux-next is a little volatile with the
>   swapper_space per type changes which will affect this patchset.
> * TODO: Move some stats from debugfs to sysfs. Which ones? (Rik)
> 
> v2:
> * Rename zswap_fs_* functions to zswap_frontswap_* to avoid
>   confusion with "filesystem"
> * Add comment about what the tree lock protects
> * Remove "#if 0" code (should have been done before)
> * Break out changes to existing swap code into separate patch
> * Fix blank line EOF warning on documentation file
> * Rebase to next-20130107
> 
> Zswap Overview:
> 
> Zswap is a lightweight compressed cache for swap pages. It takes
> pages that are in the process of being swapped out and attempts to
> compress them into a dynamically allocated RAM-based memory pool.
> If this process is successful, the writeback to the swap device is
> deferred and, in many cases, avoided completely.  This results in
> a significant I/O reduction and performance gains for systems that
> are swapping.
> 
> The results of a kernel building benchmark indicate a
> runtime reduction of 53% and an I/O reduction 76% with zswap vs normal
> swapping with a kernel build under heavy memory pressure (see
> Performance section for more).
> 
> Some additional performance metrics regarding the performance
> improvements and I/O reductions that can be achieved using zswap as
> measured by SPECjbb are provided here:
> 
> http://ibm.co/VCgHvM
> 
> These results include runs on x86 and new results on Power7+ with
> hardware compression acceleration.
> 
> Of particular note is that zswap is able to evict pages from the compressed
> cache, on an LRU basis, to the backing swap device when the compressed pool
> reaches its size limit or the pool is unable to obtain additional pages
> from the buddy allocator.  This eviction functionality had been identified
> as a requirement in prior community discussions.
> 
> Patchset Structure:
> 1:   add atomic_t get/set to debugfs
> 2:   promote zsmalloc to /lib
> 3,4: changes to existing swap code for zswap
> 5,6: add zswap and documentation
> 
> Rationale:
> 
> Zswap provides compressed swap caching that basically trades CPU cycles
> for reduced swap I/O.  This trade-off can result in a significant
> performance improvement, as reads from/writes to the compressed
> cache are almost always faster than reading from a swap device,
> which incurs the latency of an asynchronous block I/O read.
> 
> Some potential benefits:
> * Desktop/laptop users with limited RAM capacities can mitigate the
>     performance impact of swapping.
> * Overcommitted guests that share a common I/O resource can
>     dramatically reduce their swap I/O pressure, avoiding heavy
>     handed I/O throttling by the hypervisor.  This allows more work
>     to get done with less impact to the guest workload and guests
>     sharing the I/O subsystem
> * Users with SSDs as swap devices can extend the life of the device by
>     drastically reducing life-shortening writes.
> 
> Compressed swap is also provided in zcache, along with page cache
> compression and RAM clustering through RAMSter. Zswap seeks to deliver
> the benefit of swap  compression to users in a discrete function.
> This design decision is akin to Unix design philosophy of doing one
> thing well, it leaves file cache compression and other features
> for separate code.
> 
> Design:
> 
> Zswap receives pages for compression through the Frontswap API and
> is able to evict pages from its own compressed pool on an LRU basis
> and write them back to the backing swap device in the case that the
> compressed pool is full or unable to secure additional pages from
> the buddy allocator.
> 
> Zswap makes use of zsmalloc for managing the compressed memory
> pool.  This is because zsmalloc is specifically designed to minimize
> fragmentation on large (> PAGE_SIZE/2) allocation sizes.  Each
> allocation in zsmalloc is not directly accessible by address.
> Rather, a handle is returned by the allocation routine and that handle
> must be mapped before being accessed.  The compressed memory pool grows
> on demand and shrinks as compressed pages are freed.  The pool is
> not preallocated.
> 
> When a swap page is passed from frontswap to zswap, zswap maintains
> a mapping of the swap entry, a combination of the swap type and swap
> offset, to the zsmalloc handle that references that compressed swap
> page.  This mapping is achieved with a red-black tree per swap type.
> The swap offset is the search key for the tree nodes.
> 
> Zswap seeks to be simple in its policies.  Sysfs attributes allow for
> two user controlled policies:
> * max_compression_ratio - Maximum compression ratio, as a percentage,
>     for an acceptable compressed page. Any page that does not compress
>     by at least this ratio will be rejected.
> * max_pool_percent - The maximum percentage of memory that the compressed
>     pool can occupy.
> 
> To enable zswap, the "enabled" attribute must be set to 1 at boot time.
> 
> Zswap allows the compressor to be selected at kernel boot time by
> setting the “compressor” attribute.  The default compressor is lzo.
> 
> A debugfs interface is provided for various statistics about pool size,
> number of pages stored, and various counters for the reasons pages
> are rejected.
> 
> Performance, Kernel Building:
> 
> Setup
> ========
> Gentoo w/ kernel v3.7-rc7
> Quad-core i5-2500 @ 3.3GHz
> 512MB DDR3 1600MHz (limited with mem=512m on boot)
> Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
> majflt are major page faults reported by the time command
> pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
> the make -jN
> 
> Summary
> ========
> * Zswap reduces I/O and improves performance at all swap pressure levels.
> 
> * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
>   over 1.5GB of I/O, and cut runtime in half.

How can I reproduce your benchmark?

> 
> Details
> ========
> I/O (in pages)
> 	base				zswap				change	change
> N	pswpin	pswpout	majflt	I/O sum	pswpin	pswpout	majflt	I/O sum	%I/O	MB
> 8	1	335	291	627	0	0	249	249	-60%	1
> 12	3688	14315	5290	23293	123	860	5954	6937	-70%	64
> 16	12711	46179	16803	75693	2936	7390	46092	56418	-25%	75
> 20	42178	133781	49898	225857	9460	28382	92951	130793	-42%	371
> 24	96079	357280	105242	558601	7719	18484	109309	135512	-76%	1653
> 
> Runtime (in seconds)
> N	base	zswap	%change
> 8	107	107	0%
> 12	128	110	-14%
> 16	191	179	-6%
> 20	371	240	-35%
> 24	570	267	-53%
> 
> %CPU utilization (out of 400% on 4 cpus)
> N	base	zswap	%change
> 8	317	319	1%
> 12	267	311	16%
> 16	179	191	7%
> 20	94	143	52%
> 24	60	128	113%
> 
> 
> Seth Jennings (7):
>   debugfs: add get/set for atomic types
>   zsmalloc: promote to lib/
>   zswap: add to mm/
>   mm: break up swap_writepage() for frontswap backends
>   mm: allow for outstanding swap writeback accounting
>   zswap: add flushing support
>   zswap: add documentation
> 
>  Documentation/vm/zswap.txt                         |   73 ++
>  drivers/staging/Kconfig                            |    2 -
>  drivers/staging/Makefile                           |    1 -
>  drivers/staging/zcache/zcache-main.c               |    3 +-
>  drivers/staging/zram/zram_drv.h                    |    3 +-
>  drivers/staging/zsmalloc/Kconfig                   |   10 -
>  drivers/staging/zsmalloc/Makefile                  |    3 -
>  fs/debugfs/file.c                                  |   42 +
>  include/linux/debugfs.h                            |    2 +
>  include/linux/swap.h                               |    4 +
>  .../staging/zsmalloc => include/linux}/zsmalloc.h  |    0
>  lib/Kconfig                                        |   18 +
>  lib/Makefile                                       |    1 +
>  .../zsmalloc/zsmalloc-main.c => lib/zsmalloc.c     |    3 +-
>  mm/Kconfig                                         |   15 +
>  mm/Makefile                                        |    1 +
>  mm/page_io.c                                       |   22 +-
>  mm/swap_state.c                                    |    2 +-
>  mm/zswap.c                                         | 1073 ++++++++++++++++++++
>  19 files changed, 1250 insertions(+), 28 deletions(-)
>  create mode 100644 Documentation/vm/zswap.txt
>  delete mode 100644 drivers/staging/zsmalloc/Kconfig
>  delete mode 100644 drivers/staging/zsmalloc/Makefile
>  rename {drivers/staging/zsmalloc => include/linux}/zsmalloc.h (100%)
>  rename drivers/staging/zsmalloc/zsmalloc-main.c => lib/zsmalloc.c (99%)
>  create mode 100644 mm/zswap.c
> 



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 3/7] zswap: add to mm/
  2013-01-31 19:06     ` Seth Jennings
  2013-01-31 20:07       ` Robert Jennings
@ 2013-02-01  2:38       ` Minchan Kim
  2013-02-01 15:31         ` Seth Jennings
  1 sibling, 1 reply; 34+ messages in thread
From: Minchan Kim @ 2013-02-01  2:38 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Thu, Jan 31, 2013 at 01:06:46PM -0600, Seth Jennings wrote:
> On 01/31/2013 01:07 AM, Minchan Kim wrote:
> > On Tue, Jan 29, 2013 at 03:40:23PM -0600, Seth Jennings wrote:
> >> zswap is a thin compression backend for frontswap. It receives
> >> pages from frontswap and attempts to store them in a compressed
> >> memory pool, resulting in an effective partial memory reclaim and
> >> dramatically reduced swap device I/O.
> >>
> >> Additionally, in most cases, pages can be retrieved from this
> >> compressed store much more quickly than reading from traditional
> >> swap devices resulting in faster performance for many workloads.
> >>
> >> This patch adds the zswap driver to mm/
> >>
> >> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> >> ---
> >>  mm/Kconfig  |  15 ++
> >>  mm/Makefile |   1 +
> >>  mm/zswap.c  | 656 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>  3 files changed, 672 insertions(+)
> >>  create mode 100644 mm/zswap.c
> >>
> >> diff --git a/mm/Kconfig b/mm/Kconfig
> >> index 278e3ab..14b9acb 100644
> >> --- a/mm/Kconfig
> >> +++ b/mm/Kconfig
> >> @@ -446,3 +446,18 @@ config FRONTSWAP
> >>  	  and swap data is stored as normal on the matching swap device.
> >>  
> >>  	  If unsure, say Y to enable frontswap.
> >> +
> >> +config ZSWAP
> >> +	bool "In-kernel swap page compression"
> >> +	depends on FRONTSWAP && CRYPTO
> >> +	select CRYPTO_LZO
> >> +	select ZSMALLOC
> > 
> > Again, I'm asking why zswap should have a dependency on CRYPTO?
> > Couldn't we support it as an option? I'd like to use zswap without CRYPTO
> > like zram.
> 
> The reason we need CRYPTO is that zswap uses it to support a pluggable
> compression model.  zswap can use any compressor that has a crypto API
> driver.  zswap has _symbol dependencies_ on CRYPTO.  If it isn't
> selected, the build breaks.

I think we can factor out the compression part and remove the
dependency at compile time via Kconfig, no?  Of course, if we disable
CRYPTO in Kconfig, we lose the pluggable model, but that's not a
problem for an embedded system.
Anyway, if it's a burden for you at the moment, I'm not going to
insist on it.  I will do it myself.
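
Roughly what I have in mind (just a sketch; the CONFIG_ZSWAP_CRYPTO
symbol and the per-cpu zswap_lzo_wrkmem buffer, sized
LZO1X_1_MEM_COMPRESS, are things that would have to be added):

	#ifdef CONFIG_ZSWAP_CRYPTO
	static int zswap_compress(const u8 *src, unsigned int slen,
				u8 *dst, unsigned int *dlen)
	{
		struct crypto_comp *tfm;
		int ret;

		tfm = *per_cpu_ptr(zswap_comp_pcpu_tfms, get_cpu());
		ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
		put_cpu();
		return ret;
	}
	#else	/* hard-wired LZO, no pluggable model */
	static int zswap_compress(const u8 *src, unsigned int slen,
				u8 *dst, unsigned int *dlen)
	{
		size_t outlen = *dlen;
		int ret;

		ret = lzo1x_1_compress(src, slen, dst, &outlen,
					get_cpu_var(zswap_lzo_wrkmem));
		put_cpu_var(zswap_lzo_wrkmem);
		*dlen = outlen;
		return ret;
	}
	#endif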

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 6/7] zswap: add flushing support
  2013-01-29 21:40 ` [PATCHv4 6/7] zswap: add flushing support Seth Jennings
  2013-01-29 23:03   ` Andrew Morton
@ 2013-02-01  7:27   ` Minchan Kim
  2013-02-13  6:24     ` Seth Jennings
  1 sibling, 1 reply; 34+ messages in thread
From: Minchan Kim @ 2013-02-01  7:27 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Tue, Jan 29, 2013 at 03:40:26PM -0600, Seth Jennings wrote:
> This patchset adds support for flushing pages out of the compressed
> pool to the swap device
> 

I know you haven't had much time since you sent the previous version,
but please add a lot more description next time:

1. advertise the "awesome feature" that migrates pages from zswap to
   the real swap device
2. What is the tmppage?
3. the frontswap_load/store/invalidate/flush races and object lifetimes
4. the reclaim policy

> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
> ---
>  mm/zswap.c | 451 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 434 insertions(+), 17 deletions(-)
> 
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a6c2928..b8e5673 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -34,6 +34,12 @@
>  #include <linux/mempool.h>
>  #include <linux/zsmalloc.h>
>  
> +#include <linux/mm_types.h>
> +#include <linux/page-flags.h>
> +#include <linux/swapops.h>
> +#include <linux/writeback.h>
> +#include <linux/pagemap.h>
> +
>  /*********************************
>  * statistics
>  **********************************/
> @@ -41,6 +47,8 @@
>  static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
>  /* The number of compressed pages currently stored in zswap */
>  static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
> +/* The number of outstanding pages awaiting writeback */
> +static atomic_t zswap_outstanding_flushes = ATOMIC_INIT(0);
>  
>  /*
>   * The statistics below are not protected from concurrent access for
> @@ -49,9 +57,14 @@ static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
>   * certain event is occurring.
>  */
>  static u64 zswap_pool_limit_hit;
> +static u64 zswap_flushed_pages;
>  static u64 zswap_reject_compress_poor;
> +static u64 zswap_flush_attempted;
> +static u64 zswap_reject_tmppage_fail;

What does this mean?  Hmm, I looked at the code: it counts failures to
allocate a tmppage.

How about "zswap_tmppage_alloc_fail"?

> +static u64 zswap_reject_flush_fail;

This one confused me too, so I looked at the code but failed to find
any use.  Remove it?

>  static u64 zswap_reject_zsmalloc_fail;

We can remove this because we can derive it as
(zswap_flush_attempted - zswap_saved_by_flush).

>  static u64 zswap_reject_kmemcache_fail;
> +static u64 zswap_saved_by_flush;

I can't think of a better name, but it needs a comment at a minimum.
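
Maybe something like this, if I read the retry path below correctly:

	/* number of stores that only succeeded because flushing
	 * freed up enough pool space for the retried allocation */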

>  static u64 zswap_duplicate_entry;
>  
>  /*********************************
> @@ -80,6 +93,14 @@ static unsigned int zswap_max_compression_ratio = 80;
>  module_param_named(max_compression_ratio,
>  			zswap_max_compression_ratio, uint, 0644);
>  
> +/*
> + * Maximum number of outstanding flushes allowed at any given time.
> + * This is to prevent decompressing an unbounded number of compressed
> + * pages into the swap cache all at once, and to help with writeback
> + * congestion.
> +*/
> +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
> +
>  /*********************************
>  * compression functions
>  **********************************/
> @@ -145,14 +166,23 @@ static void zswap_comp_exit(void)
>  **********************************/

Please explain why we need the LRU and the refcount.
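
For example (my wording -- please correct it if I have misread the
design):

	/*
	 * The lru list orders entries for writeback victim selection.
	 * The refcount keeps an entry alive while a load or a flush
	 * is still using it outside the tree lock.
	 */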

>  struct zswap_entry {
>  	struct rb_node rbnode;
> +	struct list_head lru;
> +	int refcount;

Just a nit: I understand why you don't use an atomic, but it would be
nice to write that down in the patch description to prevent
unnecessary arguing.

>  	unsigned type;
>  	pgoff_t offset;
>  	unsigned long handle;
>  	unsigned int length;
>  };
>  
> +/*
> + * The tree lock in the zswap_tree struct protects a few things:
> + * - the rbtree
> + * - the lru list
> + * - the refcount field of each entry in the tree
> + */
>  struct zswap_tree {
>  	struct rb_root rbroot;
> +	struct list_head lru;
>  	spinlock_t lock;
>  	struct zs_pool *pool;
>  };
> @@ -184,6 +214,8 @@ static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
>  	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
>  	if (!entry)
>  		return NULL;
> +	INIT_LIST_HEAD(&entry->lru);
> +	entry->refcount = 1;
>  	return entry;
>  }
>  
> @@ -192,6 +224,17 @@ static inline void zswap_entry_cache_free(struct zswap_entry *entry)
>  	kmem_cache_free(zswap_entry_cache, entry);
>  }
>  
> +static inline void zswap_entry_get(struct zswap_entry *entry)
> +{
> +	entry->refcount++;
> +}
> +
> +static inline int zswap_entry_put(struct zswap_entry *entry)
> +{
> +	entry->refcount--;
> +	return entry->refcount;
> +}
> +
>  /*********************************
>  * rbtree functions
>  **********************************/
> @@ -367,6 +410,278 @@ static struct zs_ops zswap_zs_ops = {
>  };
>  
>  /*********************************
> +* flush code
> +**********************************/

As Andrew pointed out, let's change the naming.  I'd like it to imply
"send back to the original seat" rather than "flush", because the page
was merely intercepted by frontswap on the way out.  Thinking about it
more, the current implementation works that way, but we could add an
indirection layer to swap so that a frontswap page could be migrated
to another swap slot to enhance swapout bandwidth, so "migrate the
swap page elsewhere" would be more proper.  I'll defer to the native
speakers. :)

> +static void zswap_end_swap_write(struct bio *bio, int err)
> +{
> +	end_swap_bio_write(bio, err);
> +	atomic_dec(&zswap_outstanding_flushes);
> +	zswap_flushed_pages++;
> +}
> +
> +/*
> + * zswap_get_swap_cache_page
> + *
> + * This is an adaptation of read_swap_cache_async()

Please write down this function's goal and why we need it; the
function is complex enough to warrant it.

> + *
> + * On success, the page is returned in retpage
> + * Returns 0 if page was already in the swap cache, page is not locked
> + * Returns 1 if the new page needs to be populated, page is locked

      Returns -ENOMEM if the allocation failed

> + */
> +static int zswap_get_swap_cache_page(swp_entry_t entry,
> +				struct page **retpage)
> +{
> +	struct page *found_page, *new_page = NULL;
> +	int err;
> +
> +	*retpage = NULL;
> +	do {
> +		/*
> +		 * First check the swap cache.  Since this is normally
> +		 * called after lookup_swap_cache() failed, re-calling
> +		 * that would confuse statistics.
> +		 */
> +		found_page = find_get_page(&swapper_space, entry.val);
> +		if (found_page)
> +			break;
> +
> +		/*
> +		 * Get a new page to read into from swap.
> +		 */
> +		if (!new_page) {
> +			new_page = alloc_page(GFP_KERNEL);
> +			if (!new_page)
> +				break; /* Out of memory */
> +		}
> +
> +		/*
> +		 * call radix_tree_preload() while we can wait.
> +		 */
> +		err = radix_tree_preload(GFP_KERNEL);
> +		if (err)
> +			break;
> +
> +		/*
> +		 * Swap entry may have been freed since our caller observed it.
> +		 */
> +		err = swapcache_prepare(entry);
> +		if (err == -EEXIST) { /* seems racy */
> +			radix_tree_preload_end();
> +			continue;
> +		}
> +		if (err) { /* swp entry is obsolete ? */
> +			radix_tree_preload_end();
> +			break;
> +		}
> +
> +		/* May fail (-ENOMEM) if radix-tree node allocation failed. */
> +		__set_page_locked(new_page);
> +		SetPageSwapBacked(new_page);
> +		err = __add_to_swap_cache(new_page, entry);
> +		if (likely(!err)) {
> +			radix_tree_preload_end();
> +			lru_cache_add_anon(new_page);
> +			*retpage = new_page;
> +			return 1;
> +		}
> +		radix_tree_preload_end();
> +		ClearPageSwapBacked(new_page);
> +		__clear_page_locked(new_page);
> +		/*
> +		 * add_to_swap_cache() doesn't return -EEXIST, so we can safely
> +		 * clear SWAP_HAS_CACHE flag.
> +		 */
> +		swapcache_free(entry, NULL);
> +	} while (err != -ENOMEM);
> +
> +	if (new_page)
> +		page_cache_release(new_page);
> +	if (!found_page)
> +		return -ENOMEM;
> +	*retpage = found_page;
> +	return 0;
> +}
> +
> +static int zswap_flush_entry(struct zswap_entry *entry)
> +{
> +	unsigned long type = entry->type;
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct page *page;
> +	swp_entry_t swpentry;
> +	u8 *src, *dst;
> +	unsigned int dlen;
> +	int ret, refcount;
> +	struct writeback_control wbc = {
> +		.sync_mode = WB_SYNC_NONE,
> +	};
> +
> +	/* get/allocate page in the swap cache */
> +	swpentry = swp_entry(type, entry->offset);
> +	ret = zswap_get_swap_cache_page(swpentry, &page);

IMHO, zswap_get_swap_cache_page should return a meaningful enum type
so we can use a switch instead of an if-else series.
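
Something like this (just a sketch; the names are invented here):

	enum zswap_get_swap_ret {
		ZSWAP_SWAPCACHE_EXIST,
		ZSWAP_SWAPCACHE_NEW,
		ZSWAP_SWAPCACHE_NOMEM,
	};

	switch (zswap_get_swap_cache_page(swpentry, &page)) {
	case ZSWAP_SWAPCACHE_NOMEM:
		return -ENOMEM;
	case ZSWAP_SWAPCACHE_EXIST:
		/* the "already in the swap cache" path below */
		break;
	case ZSWAP_SWAPCACHE_NEW:
		/* the decompress-into-new-page path below */
		break;
	}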

> +	if (ret < 0)
> +		return ret;
> +	else if (ret) {
> +		/* decompress */
> +		dlen = PAGE_SIZE;
> +		src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
> +		dst = kmap_atomic(page);
> +		ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
> +				dst, &dlen);
> +		kunmap_atomic(dst);
> +		zs_unmap_object(tree->pool, entry->handle);
> +		BUG_ON(ret);
> +		BUG_ON(dlen != PAGE_SIZE);
> +		SetPageUptodate(page);
> +	} else {
> +		/* page is already in the swap cache, ignore for now */
> +		spin_lock(&tree->lock);
> +		refcount = zswap_entry_put(entry);
> +		spin_unlock(&tree->lock);
> +
> +		if (likely(refcount))
> +			return 0;

This case doesn't mean we actually wrote the page out and migrated
it, so it would be better to return non-zero.

> +
> +		/* if the refcount is zero, invalidate must have come in */
> +		/* free */
> +		zs_free(tree->pool, entry->handle);
> +		zswap_entry_cache_free(entry);
> +		atomic_dec(&zswap_stored_pages);

It would be better to factor this sequence out into a helper,
something like:

static void zswap_free_entry(struct zswap_tree *tree,
			     struct zswap_entry *entry)
{
	zs_free(tree->pool, entry->handle);
	zswap_entry_cache_free(entry);
	atomic_dec(&zswap_stored_pages);
}

> +
> +		return 0;
> +	}
> +
> +	/* start writeback */
> +	SetPageReclaim(page);
> +	/*
> +	 * Return value is ignored here because it doesn't change anything
> +	 * for us.  Page is returned unlocked.
> +	 */
> +	(void)__swap_writepage(page, &wbc, zswap_end_swap_write);
> +	page_cache_release(page);
> +	atomic_inc(&zswap_outstanding_flushes);
> +
> +	/* remove */
> +	spin_lock(&tree->lock);
> +	refcount = zswap_entry_put(entry);
> +	if (refcount > 1) {

Let's be kind to the reader:
                /* If we raced with zswap_frontswap_load, move the
                 * zswap_entry to the tail of the LRU again: it was
                 * used recently, so there is no point in removing it
                 * from memory.  We could end up with duplicate copies
                 * in memory and on the real swap device, but no
                 * biggie.
                 */

> +		/* load in progress, load will free */
> +		spin_unlock(&tree->lock);
> +		return 0;
> +	}
> +	if (refcount == 1)
> +		/* no invalidate yet, remove from rbtree */
> +		rb_erase(&entry->rbnode, &tree->rbroot);
> +	spin_unlock(&tree->lock);
> +
> +	/* free */
> +	zs_free(tree->pool, entry->handle);
> +	zswap_entry_cache_free(entry);
> +	atomic_dec(&zswap_stored_pages);
> +
> +	return 0;
> +}
> +
> +static void zswap_flush_entries(unsigned type, int nr)

Just a nit: it would be useful to return the number of migrated
pages.
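
i.e.,

	-static void zswap_flush_entries(unsigned type, int nr)
	+static int zswap_flush_entries(unsigned type, int nr)

counting how many zswap_flush_entry() calls succeed, so the caller in
zswap_frontswap_store() can tell whether retrying zs_malloc() is even
worth it.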

> +{
> +	struct zswap_tree *tree = zswap_trees[type];
> +	struct zswap_entry *entry;
> +	int i, ret;
> +
> +/*
> + * This limit is arbitrary for now until a better
> + * policy can be implemented. This is so we don't
> + * eat all of RAM decompressing pages for writeback.
> + */

This comment block needs to be indented to match the code.

> +	if (atomic_read(&zswap_outstanding_flushes) >
> +		ZSWAP_MAX_OUTSTANDING_FLUSHES)
> +		return;
> +
> +	for (i = 0; i < nr; i++) {

As other people already pointed out, LRU reclaim could be a problem
because the objects are spread across several zspages.  Just an idea:
I'd like to reclaim per zspage instead of per object.  For that, we
would have to add an age facility to zsmalloc, which could be enabled
optionally by Kconfig.

zsmalloc would give an age to every allocated object.
For example, say we allocate some objects in the following order:

Object allocation order : O(A), O(B), O(C), O(D), O(E)

so the LRU ordering is the same:

LRU ordering : head-O(A)->O(B)->O(C)->O(D)->O(E)->tail
(tail is the most recently used object)

zsmalloc could then assign ages as follows:

O(A,1), O(B,2), O(C,3), O(D,4), O(E,5)

Suppose they end up located like this, where each zspage's age is the
sum of its live objects' ages:

     zspage 1:4         zspage 2:5          zspage 3:6
| O(A, 1), O(C, 3) | -- | O(E, 5) | -- | O(B, 2), O(D, 4) |

When zswap tries to reclaim, it wants the zspage with the smallest
age, which is zspage 1 with age 4 (= 1 + 3).  That zspage is likely to
hold the least recently used objects, and reclaiming it is guaranteed
to get a whole free page back for us.
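
A very rough sketch of the bookkeeping (hypothetical -- nothing like
this exists in zsmalloc today):

	static u64 zs_age_clock;	/* bumped on every allocation */

	struct zspage_age {
		u64 age_sum;		/* sum of live objects' ages */
		unsigned int inuse;	/* live objects in this zspage */
	};

	static void zs_age_obj_alloc(struct zspage_age *zp, u64 *obj_age)
	{
		*obj_age = ++zs_age_clock;
		zp->age_sum += *obj_age;
		zp->inuse++;
	}

	static void zs_age_obj_free(struct zspage_age *zp, u64 obj_age)
	{
		zp->age_sum -= obj_age;
		zp->inuse--;
	}

The reclaim victim would then be the zspage with the smallest age_sum,
as in the example above.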

> +		/* dequeue from lru */
> +		spin_lock(&tree->lock);
> +		if (list_empty(&tree->lru)) {
> +			spin_unlock(&tree->lock);
> +			break;
> +		}
> +		entry = list_first_entry(&tree->lru,
> +				struct zswap_entry, lru);
> +		list_del(&entry->lru);

Use list_del_init and list_empty.

> +		zswap_entry_get(entry);

Where do you decrease the refcount if zswap_flush_entry fails due to
an allocation failure?
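
i.e. something like this in the failure path below:

	ret = zswap_flush_entry(entry);
	if (ret) {
		/* put back on the lru and drop our extra reference */
		spin_lock(&tree->lock);
		zswap_entry_put(entry);
		list_add(&entry->lru, &tree->lru);
		spin_unlock(&tree->lock);
	}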

> +		spin_unlock(&tree->lock);
> +		ret = zswap_flush_entry(entry);
> +		if (ret) {
> +			/* put back on the lru */
> +			spin_lock(&tree->lock);
> +			list_add(&entry->lru, &tree->lru);
> +			spin_unlock(&tree->lock);
> +		} else {
> +			if (atomic_read(&zswap_outstanding_flushes) >
> +				ZSWAP_MAX_OUTSTANDING_FLUSHES)
> +				break;
> +		}
> +	}
> +}
> +
> +/*******************************************
> +* page pool for temporary compression result
> +********************************************/
> +#define ZSWAP_TMPPAGE_POOL_PAGES 16
> +static LIST_HEAD(zswap_tmppage_list);
> +static DEFINE_SPINLOCK(zswap_tmppage_lock);
> +
> +static void zswap_tmppage_pool_destroy(void)
> +{
> +	struct page *page, *tmppage;
> +
> +	spin_lock(&zswap_tmppage_lock);
> +	list_for_each_entry_safe(page, tmppage, &zswap_tmppage_list, lru) {
> +		list_del(&page->lru);
> +		__free_pages(page, 1);
> +	}
> +	spin_unlock(&zswap_tmppage_lock);
> +}
> +
> +static int zswap_tmppage_pool_create(void)
> +{
> +	int i;
> +	struct page *page;
> +
> +	for (i = 0; i < ZSWAP_TMPPAGE_POOL_PAGES; i++) {
> +		page = alloc_pages(GFP_KERNEL, 1);
> +		if (!page) {
> +			zswap_tmppage_pool_destroy();
> +			return -ENOMEM;
> +		}
> +		spin_lock(&zswap_tmppage_lock);
> +		list_add(&page->lru, &zswap_tmppage_list);
> +		spin_unlock(&zswap_tmppage_lock);
> +	}
> +	return 0;
> +}
> +
> +static inline struct page *zswap_tmppage_alloc(void)
> +{
> +	struct page *page;
> +
> +	spin_lock(&zswap_tmppage_lock);
> +	if (list_empty(&zswap_tmppage_list)) {
> +		spin_unlock(&zswap_tmppage_lock);
> +		return NULL;
> +	}
> +	page = list_first_entry(&zswap_tmppage_list, struct page, lru);
> +	list_del(&page->lru);
> +	spin_unlock(&zswap_tmppage_lock);
> +	return page;
> +}
> +
> +static inline void zswap_tmppage_free(struct page *page)
> +{
> +	spin_lock(&zswap_tmppage_lock);
> +	list_add(&page->lru, &zswap_tmppage_list);
> +	spin_unlock(&zswap_tmppage_lock);
> +}
> +
> +/*********************************
>  * frontswap hooks
>  **********************************/
>  /* attempts to compress and store an single page */
> @@ -378,7 +693,9 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>  	unsigned int dlen = PAGE_SIZE;
>  	unsigned long handle;
>  	char *buf;
> -	u8 *src, *dst;
> +	u8 *src, *dst, *tmpdst;
> +	struct page *tmppage;
> +	bool flush_attempted = 0;
>  
>  	if (!tree) {
>  		ret = -ENODEV;
> @@ -392,12 +709,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>  	kunmap_atomic(src);
>  	if (ret) {
>  		ret = -EINVAL;
> -		goto putcpu;
> +		goto freepage;
>  	}
>  	if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
>  		zswap_reject_compress_poor++;
>  		ret = -E2BIG;
> -		goto putcpu;
> +		goto freepage;
>  	}
>  
>  	/* store */
> @@ -405,15 +722,46 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>  		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
>  			__GFP_NOWARN);
>  	if (!handle) {
> -		zswap_reject_zsmalloc_fail++;
> -		ret = -ENOMEM;
> -		goto putcpu;
> +		zswap_flush_attempted++;
> +		/*
> +		 * Copy compressed buffer out of per-cpu storage so
> +		 * we can re-enable preemption.
> +		*/
> +		tmppage = zswap_tmppage_alloc();
> +		if (!tmppage) {
> +			zswap_reject_tmppage_fail++;
> +			ret = -ENOMEM;
> +			goto freepage;
> +		}
> +		flush_attempted = 1;
> +		tmpdst = page_address(tmppage);
> +		memcpy(tmpdst, dst, dlen);
> +		dst = tmpdst;
> +		put_cpu_var(zswap_dstmem);

Just using a copy to enable preemption?  I don't have a good idea either, but surely
we could come up with a better approach in the end.  Let's think about it.

> +
> +		/* try to free up some space */
> +		/* TODO: replace with more targeted policy */
> +		zswap_flush_entries(type, 16);
> +		/* try again, allowing wait */
> +		handle = zs_malloc(tree->pool, dlen,
> +			__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
> +				__GFP_NOWARN);
> +		if (!handle) {
> +			/* still no space, fail */
> +			zswap_reject_zsmalloc_fail++;
> +			ret = -ENOMEM;
> +			goto freepage;
> +		}
> +		zswap_saved_by_flush++;
>  	}
>  
>  	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
>  	memcpy(buf, dst, dlen);
>  	zs_unmap_object(tree->pool, handle);
> -	put_cpu_var(zswap_dstmem);
> +	if (flush_attempted)
> +		zswap_tmppage_free(tmppage);
> +	else
> +		put_cpu_var(zswap_dstmem);
>  
>  	/* allocate entry */
>  	entry = zswap_entry_cache_alloc(GFP_KERNEL);
> @@ -436,16 +784,19 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>  		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
>  		if (ret == -EEXIST) {
>  			zswap_duplicate_entry++;
> -
> -			/* remove from rbtree */
> +			/* remove from rbtree and lru */
>  			rb_erase(&dupentry->rbnode, &tree->rbroot);
> -			
> -			/* free */
> -			zs_free(tree->pool, dupentry->handle);
> -			zswap_entry_cache_free(dupentry);
> -			atomic_dec(&zswap_stored_pages);
> +			if (dupentry->lru.next != LIST_POISON1)
> +				list_del(&dupentry->lru);
> +			if (!zswap_entry_put(dupentry)) {
> +				/* free */
> +				zs_free(tree->pool, dupentry->handle);
> +				zswap_entry_cache_free(dupentry);
> +				atomic_dec(&zswap_stored_pages);
> +			}
>  		}
>  	} while (ret == -EEXIST);
> +	list_add_tail(&entry->lru, &tree->lru);
>  	spin_unlock(&tree->lock);
>  
>  	/* update stats */
> @@ -453,8 +804,11 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>  
>  	return 0;
>  
> -putcpu:
> -	put_cpu_var(zswap_dstmem);
> +freepage:
> +	if (flush_attempted)
> +		zswap_tmppage_free(tmppage);
> +	else
> +		put_cpu_var(zswap_dstmem);
>  reject:
>  	return ret;
>  }
> @@ -469,10 +823,21 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page
>  	struct zswap_entry *entry;
>  	u8 *src, *dst;
>  	unsigned int dlen;
> +	int refcount;
>  
>  	/* find */
>  	spin_lock(&tree->lock);
>  	entry = zswap_rb_search(&tree->rbroot, offset);
> +	if (!entry) {
> +		/* entry was flushed */
> +		spin_unlock(&tree->lock);
> +		return -1;
> +	}

Let's be kind and add a comment:

        /* flusher/invalidate can destroy entry under us */
> +	zswap_entry_get(entry);
> +
> +	/* remove from lru */
> +	if (entry->lru.next != LIST_POISON1)
> +		list_del(&entry->lru);

        if (!list_empty(&entry->lru))
                list_del_init(&entry->lru);

>  	spin_unlock(&tree->lock);
>  
>  	/* decompress */
> @@ -484,6 +849,25 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page
>  	kunmap_atomic(dst);
>  	zs_unmap_object(tree->pool, entry->handle);
>  
> +	spin_lock(&tree->lock);
> +	refcount = zswap_entry_put(entry);
> +	if (likely(refcount)) {
> +		list_add_tail(&entry->lru, &tree->lru);
> +		spin_unlock(&tree->lock);
> +		return 0;
> +	}
> +	spin_unlock(&tree->lock);
> +
> +	/*
> +	 * We don't have to unlink from the rbtree because zswap_flush_entry()
> +	 * or zswap_frontswap_invalidate page() has already done this for us if we
> +	 * are the last reference.
> +	 */
> +	/* free */
> +	zs_free(tree->pool, entry->handle);
> +	zswap_entry_cache_free(entry);
> +	atomic_dec(&zswap_stored_pages);
> +
>  	return 0;
>  }
>  
> @@ -492,14 +876,27 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
>  {
>  	struct zswap_tree *tree = zswap_trees[type];
>  	struct zswap_entry *entry;
> +	int refcount;
>  
>  	/* find */
>  	spin_lock(&tree->lock);
>  	entry = zswap_rb_search(&tree->rbroot, offset);
> +	if (!entry) {
> +		/* entry was flushed */
> +		spin_unlock(&tree->lock);
> +		return;
> +	}
>  
> -	/* remove from rbtree */
> +	/* remove from rbtree and lru */
>  	rb_erase(&entry->rbnode, &tree->rbroot);
> +	if (entry->lru.next != LIST_POISON1)
> +		list_del(&entry->lru);
> +	refcount = zswap_entry_put(entry);
>  	spin_unlock(&tree->lock);
> +	if (refcount) {
> +		/* must be flushing */
> +		return;
> +	}
>  
>  	/* free */
>  	zs_free(tree->pool, entry->handle);
> @@ -528,6 +925,7 @@ static void zswap_frontswap_invalidate_area(unsigned type)
>  		node = next;
>  	}
>  	tree->rbroot = RB_ROOT;
> +	INIT_LIST_HEAD(&tree->lru);
>  	spin_unlock(&tree->lock);
>  }
>  
> @@ -543,6 +941,7 @@ static void zswap_frontswap_init(unsigned type)
>  	if (!tree->pool)
>  		goto freetree;
>  	tree->rbroot = RB_ROOT;
> +	INIT_LIST_HEAD(&tree->lru);
>  	spin_lock_init(&tree->lock);
>  	zswap_trees[type] = tree;
>  	return;
> @@ -578,20 +977,32 @@ static int __init zswap_debugfs_init(void)
>  	if (!zswap_debugfs_root)
>  		return -ENOMEM;
>  
> +	debugfs_create_u64("saved_by_flush", S_IRUGO,
> +			zswap_debugfs_root, &zswap_saved_by_flush);
>  	debugfs_create_u64("pool_limit_hit", S_IRUGO,
>  			zswap_debugfs_root, &zswap_pool_limit_hit);
> +	debugfs_create_u64("reject_flush_attempted", S_IRUGO,
> +			zswap_debugfs_root, &zswap_flush_attempted);
> +	debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_tmppage_fail);
> +	debugfs_create_u64("reject_flush_fail", S_IRUGO,
> +			zswap_debugfs_root, &zswap_reject_flush_fail);
>  	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
>  			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
>  	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
>  			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
>  	debugfs_create_u64("reject_compress_poor", S_IRUGO,
>  			zswap_debugfs_root, &zswap_reject_compress_poor);
> +	debugfs_create_u64("flushed_pages", S_IRUGO,
> +			zswap_debugfs_root, &zswap_flushed_pages);
>  	debugfs_create_u64("duplicate_entry", S_IRUGO,
>  			zswap_debugfs_root, &zswap_duplicate_entry);
>  	debugfs_create_atomic_t("pool_pages", S_IRUGO,
>  			zswap_debugfs_root, &zswap_pool_pages);
>  	debugfs_create_atomic_t("stored_pages", S_IRUGO,
>  			zswap_debugfs_root, &zswap_stored_pages);
> +	debugfs_create_atomic_t("outstanding_flushes", S_IRUGO,
> +			zswap_debugfs_root, &zswap_outstanding_flushes);
>  
>  	return 0;
>  }
> @@ -627,6 +1038,10 @@ static int __init init_zswap(void)
>  		pr_err("zswap: page pool initialization failed\n");
>  		goto pagepoolfail;
>  	}
> +	if (zswap_tmppage_pool_create()) {
> +		pr_err("zswap: workmem pool initialization failed\n");
> +		goto tmppoolfail;
> +	}
>  	if (zswap_comp_init()) {
>  		pr_err("zswap: compressor initialization failed\n");
>  		goto compfail;
> @@ -642,6 +1057,8 @@ static int __init init_zswap(void)
>  pcpufail:
>  	zswap_comp_exit();
>  compfail:
> +	zswap_tmppage_pool_destroy();
> +tmppoolfail:
>  	zswap_page_pool_destroy();
>  pagepoolfail:
>  	zswap_entry_cache_destory();
> -- 
> 1.8.1.1
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-02-01  1:39 ` Simon Jeons
@ 2013-02-01 15:13   ` Seth Jennings
  2013-02-03  0:17     ` Simon Jeons
  2013-02-04  1:03     ` Simon Jeons
  0 siblings, 2 replies; 34+ messages in thread
From: Seth Jennings @ 2013-02-01 15:13 UTC (permalink / raw)
  To: Simon Jeons
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 01/31/2013 07:39 PM, Simon Jeons wrote:
> Hi Seth,
> On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
<snip>
>> Performance, Kernel Building:
>>
>> Setup
>> ========
>> Gentoo w/ kernel v3.7-rc7
>> Quad-core i5-2500 @ 3.3GHz
>> 512MB DDR3 1600MHz (limited with mem=512m on boot)
>> Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
>> majflt are major page faults reported by the time command
>> pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
>> then make -jN
>>
>> Summary
>> ========
>> * Zswap reduces I/O and improves performance at all swap pressure levels.
>>
>> * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
>>   over 1.5GB of I/O, and cut runtime in half.
> 
> How to get your benchmark?

It's just kernel building.  So "make" :)

I intentionally choose this workload so people wouldn't have to jump
through hoops to replicate the results.

Seth


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 3/7] zswap: add to mm/
  2013-02-01  2:38       ` Minchan Kim
@ 2013-02-01 15:31         ` Seth Jennings
  2013-02-01 17:46           ` Seth Jennings
  0 siblings, 1 reply; 34+ messages in thread
From: Seth Jennings @ 2013-02-01 15:31 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 01/31/2013 08:38 PM, Minchan Kim wrote:
> On Thu, Jan 31, 2013 at 01:06:46PM -0600, Seth Jennings wrote:
>> On 01/31/2013 01:07 AM, Minchan Kim wrote:
>>> On Tue, Jan 29, 2013 at 03:40:23PM -0600, Seth Jennings wrote:
>>>> zswap is a thin compression backend for frontswap. It receives
>>>> pages from frontswap and attempts to store them in a compressed
>>>> memory pool, resulting in an effective partial memory reclaim and
>>>> dramatically reduced swap device I/O.
>>>>
>>>> Additionally, in most cases, pages can be retrieved from this
>>>> compressed store much more quickly than reading from tradition
>>>> swap devices resulting in faster performance for many workloads.
>>>>
>>>> This patch adds the zswap driver to mm/
>>>>
>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>> ---
>>>>  mm/Kconfig  |  15 ++
>>>>  mm/Makefile |   1 +
>>>>  mm/zswap.c  | 656 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>  3 files changed, 672 insertions(+)
>>>>  create mode 100644 mm/zswap.c
>>>>
>>>> diff --git a/mm/Kconfig b/mm/Kconfig
>>>> index 278e3ab..14b9acb 100644
>>>> --- a/mm/Kconfig
>>>> +++ b/mm/Kconfig
>>>> @@ -446,3 +446,18 @@ config FRONTSWAP
>>>>  	  and swap data is stored as normal on the matching swap device.
>>>>  
>>>>  	  If unsure, say Y to enable frontswap.
>>>> +
>>>> +config ZSWAP
>>>> +	bool "In-kernel swap page compression"
>>>> +	depends on FRONTSWAP && CRYPTO
>>>> +	select CRYPTO_LZO
>>>> +	select ZSMALLOC
>>>
>>> Again, I'm asking why zswap should have a dependency on CRYPTO?
>>> Couldn't we support it as an option? I'd like to use zswap without CRYPTO
>>> like zram.
>>
>> The reason we need CRYPTO is that zswap uses it to support a pluggable
>> compression model.  zswap can use any compressor that has a crypto API
>> driver.  zswap has _symbol dependencies_ on CRYPTO.  If it isn't
>> selected, the build breaks.
> 
> I think we can factor out the compression part and remove the dependency
> at compile time by Kconfig. No?

I'm still not following.  How would one "factor out" the crypto API
dependency when we use it to access the compressor modules?

The only thing I can think you're saying is to hack up the code with
ifdefs to call the lzo code directly based on a Kconfig option.  I
really hope you aren't saying that though :-/

> Of course, if we disable CRYPTO in Kconfig,
> we lose the pluggable model, but that's not a problem for embedded systems.

The pluggable model is _very_ necessary for us because we use it to
access our hardware compression accelerator.  We do not use lzo in
that case.  We use 842 (crypto/842.c and drivers/crypto/nx/nx-842.c).

I'm not sure why we are misunderstanding on this.  Is there a specific
objection to depending on the crypto API here? I understand that you are
thinking about embedded systems.  Does enabling CRYPTO and
CRYPTO_LZO add significant size to the kernel or something?  Just
trying to understand why this is a problem.
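
For reference, the coupling is only through the generic crypto
compression API; the call sequence is roughly (a sketch, error
handling omitted, and zswap_compressor stands in for whatever
variable holds the chosen algorithm name):

	struct crypto_comp *tfm;
	unsigned int dlen = PAGE_SIZE;
	int ret;

	/* the compressor name is a parameter: "lzo", "842", ... */
	tfm = crypto_alloc_comp(zswap_compressor, 0, 0);
	ret = crypto_comp_compress(tfm, src, PAGE_SIZE, dst, &dlen);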

Thanks,
Seth

> 
> Anyway, if it's a burden for you at the moment, I'm not going to insist on it.
> Will do it for myself.




^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 3/7] zswap: add to mm/
  2013-02-01 15:31         ` Seth Jennings
@ 2013-02-01 17:46           ` Seth Jennings
  0 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-02-01 17:46 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 02/01/2013 09:31 AM, Seth Jennings wrote:
> On 01/31/2013 08:38 PM, Minchan Kim wrote:
>> On Thu, Jan 31, 2013 at 01:06:46PM -0600, Seth Jennings wrote:
>>> On 01/31/2013 01:07 AM, Minchan Kim wrote:
>>>> On Tue, Jan 29, 2013 at 03:40:23PM -0600, Seth Jennings wrote:
>>>>> zswap is a thin compression backend for frontswap. It receives
>>>>> pages from frontswap and attempts to store them in a compressed
>>>>> memory pool, resulting in an effective partial memory reclaim and
>>>>> dramatically reduced swap device I/O.
>>>>>
>>>>> Additionally, in most cases, pages can be retrieved from this
>>>>> compressed store much more quickly than reading from tradition
>>>>> swap devices resulting in faster performance for many workloads.
>>>>>
>>>>> This patch adds the zswap driver to mm/
>>>>>
>>>>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>>>>> ---
>>>>>  mm/Kconfig  |  15 ++
>>>>>  mm/Makefile |   1 +
>>>>>  mm/zswap.c  | 656 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>  3 files changed, 672 insertions(+)
>>>>>  create mode 100644 mm/zswap.c
>>>>>
>>>>> diff --git a/mm/Kconfig b/mm/Kconfig
>>>>> index 278e3ab..14b9acb 100644
>>>>> --- a/mm/Kconfig
>>>>> +++ b/mm/Kconfig
>>>>> @@ -446,3 +446,18 @@ config FRONTSWAP
>>>>>  	  and swap data is stored as normal on the matching swap device.
>>>>>  
>>>>>  	  If unsure, say Y to enable frontswap.
>>>>> +
>>>>> +config ZSWAP
>>>>> +	bool "In-kernel swap page compression"
>>>>> +	depends on FRONTSWAP && CRYPTO
>>>>> +	select CRYPTO_LZO
>>>>> +	select ZSMALLOC
>>>>
>>>> Again, I'm asking why zswap should have a dependency on CRYPTO?
>>>> Couldn't we support it as an option? I'd like to use zswap without CRYPTO
>>>> like zram.
>>>
>>> The reason we need CRYPTO is that zswap uses it to support a pluggable
>>> compression model.  zswap can use any compressor that has a crypto API
>>> driver.  zswap has _symbol dependencies_ on CRYPTO.  If it isn't
>>> selected, the build breaks.
>>
>> I think we can factor out the compression part and remove the dependency
>> at compile time by Kconfig. No?
> 
> I'm still not following.  How would one "factor out" the crypto API
> dependency when we use it to access the compressor modules?
> 
> The only thing I can think you're saying is to hack up the code with
> ifdefs to call the lzo code directly based on a Kconfig option.  I
> really hope you aren't saying that though :-/

Looking into this more, I found out that INET also selects CRYPTO, so
unless the kernel is built without networking, CRYPTO will be enabled
anyway.  EXT4 also selects it.

> 
>> Of course, if we disable CRYPTO in Kconfig,
>> we lose the pluggable model, but that's not a problem for embedded systems.
> 
> The pluggable model is _very_ necessary for us because we use it to
> access our hardware compression accelerator.  We do not use lzo in
> that case.  We use 842 (crypto/842.c and drivers/crypto/nx/nx-842.c).
> 
> I'm not sure why we are misunderstanding on this.  Is there a specific
> objection to depending on the crypto API here? I understand that you are
> thinking about embedded systems.  Does enabling CRYPTO and
> CRYPTO_LZO add significant size to the kernel or something?

If size is the concern, I did a quick size diff between a vmlinux with
and without CRYPTO:
			
		text	data	bss	dec
CRYPTO=n	4747622	602704	7774208	13124534
CRYPTO=y	4755437	602960	7774208	13132605
diff		7815	256	0	8071

Bottom line is CRYPTO adds about 8k to vmlinux.

Thanks,
Seth

> Just trying to understand why this is a problem.
> 
> Thanks,
> Seth
> 
>>
>> Anyway, if it's a burden for you at the moment, I'm not going to insist on it.
>> Will do it for myself.
> 
> 
> 


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-02-01 15:13   ` Seth Jennings
@ 2013-02-03  0:17     ` Simon Jeons
  2013-02-04 14:56       ` Seth Jennings
  2013-02-04  1:03     ` Simon Jeons
  1 sibling, 1 reply; 34+ messages in thread
From: Simon Jeons @ 2013-02-03  0:17 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Fri, 2013-02-01 at 09:13 -0600, Seth Jennings wrote:
> On 01/31/2013 07:39 PM, Simon Jeons wrote:
> > Hi Seth,
> > On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
> <snip>
> >> Performance, Kernel Building:
> >>
> >> Setup
> >> ========
> >> Gentoo w/ kernel v3.7-rc7
> >> Quad-core i5-2500 @ 3.3GHz
> >> 512MB DDR3 1600MHz (limited with mem=512m on boot)
> >> Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
> >> majflt are major page faults reported by the time command
> >> pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
> >> then make -jN
> >>
> >> Summary
> >> ========
> >> * Zswap reduces I/O and improves performance at all swap pressure levels.
> >>
> >> * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
> >>   over 1.5GB of I/O, and cut runtime in half.
> > 
> > How to get your benchmark?
> 
> It's just kernel building.  So "make" :)
> 
> I intentionally choose this workload so people wouldn't have to jump
> through hoops to replicate the results.

Thanks for clarifying. zswap belongs to in-kernel compression,
correct? Then what's the difference between in-memory compression and
in-kernel compression?

> 
> Seth
> 



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-02-01 15:13   ` Seth Jennings
  2013-02-03  0:17     ` Simon Jeons
@ 2013-02-04  1:03     ` Simon Jeons
  2013-02-04 15:07       ` Seth Jennings
  1 sibling, 1 reply; 34+ messages in thread
From: Simon Jeons @ 2013-02-04  1:03 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On Fri, 2013-02-01 at 09:13 -0600, Seth Jennings wrote:
> On 01/31/2013 07:39 PM, Simon Jeons wrote:
> > Hi Seth,
> > On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
> <snip>
> >> Performance, Kernel Building:
> >>
> >> Setup
> >> ========
> >> Gentoo w/ kernel v3.7-rc7
> >> Quad-core i5-2500 @ 3.3GHz
> >> 512MB DDR3 1600MHz (limited with mem=512m on boot)
> >> Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
> >> majflt are major page faults reported by the time command
> >> pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
> >> then make -jN
> >>
> >> Summary
> >> ========
> >> * Zswap reduces I/O and improves performance at all swap pressure levels.
> >>
> >> * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
> >>   over 1.5GB of I/O, and cut runtime in half.
> > 
> > How to get your benchmark?
> 
> It's just kernel building.  So "make" :)
> 
> I intentionally choose this workload so people wouldn't have to jump
> through hoops to replicate the results.

Since we already have zram, which can handle anonymous page
compression, why do we need zswap? What's the design difference between
zram and zswap?

> 
> Seth
> 



^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-02-03  0:17     ` Simon Jeons
@ 2013-02-04 14:56       ` Seth Jennings
  0 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-02-04 14:56 UTC (permalink / raw)
  To: Simon Jeons
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 02/02/2013 06:17 PM, Simon Jeons wrote:
> On Fri, 2013-02-01 at 09:13 -0600, Seth Jennings wrote:
>> On 01/31/2013 07:39 PM, Simon Jeons wrote:
>>> Hi Seth,
>>> On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
>> <snip>
>>>> Performance, Kernel Building:
>>>>
>>>> Setup
>>>> ========
>>>> Gentoo w/ kernel v3.7-rc7
>>>> Quad-core i5-2500 @ 3.3GHz
>>>> 512MB DDR3 1600MHz (limited with mem=512m on boot)
>>>> Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
>>>> majflt are major page faults reported by the time command
>>>> pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
>>>> then make -jN
>>>>
>>>> Summary
>>>> ========
>>>> * Zswap reduces I/O and improves performance at all swap pressure levels.
>>>>
>>>> * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
>>>>   over 1.5GB of I/O, and cut runtime in half.
>>>
>>> How to get your benchmark?
>>
>> It's just kernel building.  So "make" :)
>>
>> I intentionally choose this workload so people wouldn't have to jump
>> through hoops to replicate the results.
> 
> Thanks for clarifying. zswap belongs to in-kernel compression,
> correct? Then what's the difference between in-memory compression and
> in-kernel compression?

Dan made this distinction (possibly unintentionally) in his email.
For me "in-kernel" means "done by the kernel" and "in-memory" means
"stored in RAM".  So it is compression of RAM pages done by the kernel
and stored back in RAM.

I sum up the functionality of zswap as "compressed swap caching",
which I think better conveys what zswap does exactly.

Seth


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
  2013-02-04  1:03     ` Simon Jeons
@ 2013-02-04 15:07       ` Seth Jennings
  0 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-02-04 15:07 UTC (permalink / raw)
  To: Simon Jeons
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 02/03/2013 07:03 PM, Simon Jeons wrote:
> On Fri, 2013-02-01 at 09:13 -0600, Seth Jennings wrote:
>> On 01/31/2013 07:39 PM, Simon Jeons wrote:
>>> Hi Seth,
>>> On Tue, 2013-01-29 at 15:40 -0600, Seth Jennings wrote:
>> <snip>
>>>> Performance, Kernel Building:
>>>>
>>>> Setup
>>>> ========
>>>> Gentoo w/ kernel v3.7-rc7
>>>> Quad-core i5-2500 @ 3.3GHz
>>>> 512MB DDR3 1600MHz (limited with mem=512m on boot)
>>>> Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
>>>> majflt are major page faults reported by the time command
>>>> pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
>>>> then make -jN
>>>>
>>>> Summary
>>>> ========
>>>> * Zswap reduces I/O and improves performance at all swap pressure levels.
>>>>
>>>> * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
>>>>   over 1.5GB of I/O, and cut runtime in half.
>>>
>>> How to get your benchmark?
>>
>> It's just kernel building.  So "make" :)
>>
>> I intentionally choose this workload so people wouldn't have to jump
>> through hoops to replicate the results.
> 
> Since we already have zram, which can handle anonymous page
> compression, why do we need zswap? What's the design difference between
> zram and zswap?

zram is implemented as a virtual block device.  It interfaces with the
block device layer, not the swap code.  In fact, zram can be used as a
generic compressed RAM disk, not only for compressed swap.  One can
think of it as a RAM disk + compression.

So zram is the actual swap device while zswap is a caching layer above
the swap device.  zswap is not the swap device itself like zram.

Hope this clears up the difference :)

Seth


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 0/7] zswap: compressed swap caching
       [not found] ` <5110287A.5050200@linux.vnet.ibm.com>
@ 2013-02-04 21:45   ` Seth Jennings
  0 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-02-04 21:45 UTC (permalink / raw)
  To: Seth Jennings
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel, Joe Perches

On 02/04/2013 03:30 PM, Seth Jennings wrote:
> On 01/29/2013 03:40 PM, Seth Jennings wrote:
>> Sorry for the churn but just this set might be easier to review.
>> The code required for the flushing is in a separate patch now
>> as requested.

I've got a large and valuable body of feedback to integrate for v5.
Thanks to all that reviewed/commented!

It will take a little time to compile it all and coordinate with
Minchan and Nitin on the additional documentation and rationale for
zsmalloc.

I just wanted to acknowledge the feedback and state that I'm working
on v5 and I'll get it out as soon as I can.

Thanks,
Seth

>
>>
>> Changelog:
>>
>> v4:
>> * Added Acks (Minchan)
>> * Separated flushing functionality into standalone patch
>>   for easier review (Minchan)
>> * fix comment on zswap enabled attribute (Minchan)
>> * add TODO for dynamic mempool size (Minchan)
>> * and check for NULL in zswap_free_page() (Minchan)
>> * add missing zs_free() in error path (Minchan)
>> * TODO: add comments for flushing/refcounting (Minchan)
>>
>> NOTE: To build, read this:
>> http://lkml.org/lkml/2013/1/28/586
>>
>> v3:
>> * Dropped the zsmalloc patches from the set, except the promotion patch
>> which has been converted to a rename patch (vs full diff).  The dropped
>>   patches have been Acked and are going into Greg's staging tree soon.
>> * Separated [PATCHv2 7/9] into two patches since it makes changes for two
>>   different reasons (Minchan)
>> * Moved ZSWAP_MAX_OUTSTANDING_FLUSHES near the top in zswap.c (Rik)
>> * Rebase to v3.8-rc5. linux-next is a little volatile with the
>>   swapper_space per type changes which will effect this patchset.
>> * TODO: Move some stats from debugfs to sysfs. Which ones? (Rik)
>>
>> v2:
>> * Rename zswap_fs_* functions to zswap_frontswap_* to avoid
>>   confusion with "filesystem"
>> * Add comment about what the tree lock protects
>> * Remove "#if 0" code (should have been done before)
>> * Break out changes to existing swap code into separate patch
>> * Fix blank line EOF warning on documentation file
>> * Rebase to next-20130107
>>
>> Zswap Overview:
>>
>> Zswap is a lightweight compressed cache for swap pages. It takes
>> pages that are in the process of being swapped out and attempts to
>> compress them into a dynamically allocated RAM-based memory pool.
>> If this process is successful, the writeback to the swap device is
>> deferred and, in many cases, avoided completely.  This results in
>> a significant I/O reduction and performance gains for systems that
>> are swapping.
>>
>> The results of a kernel building benchmark indicate a
>> runtime reduction of 53% and an I/O reduction of 76% with zswap vs normal
>> swapping with a kernel build under heavy memory pressure (see
>> Performance section for more).
>>
>> Some additional performance metrics regarding the performance
>> improvements and I/O reductions that can be achieved using zswap as
>> measured by SPECjbb are provided here:
>>
>> http://ibm.co/VCgHvM
>>
>> These results include runs on x86 and new results on Power7+ with
>> hardware compression acceleration.
>>
>> Of particular note is that zswap is able to evict pages from the compressed
>> cache, on an LRU basis, to the backing swap device when the compressed pool
>> reaches its size limit or the pool is unable to obtain additional pages
>> from the buddy allocator.  This eviction functionality had been identified
>> as a requirement in prior community discussions.
>>
>> Patchset Structure:
>> 1:   add atomic_t get/set to debugfs
>> 2:   promote zsmalloc to /lib
>> 3,4: changes to existing swap code for zswap
>> 5,6: add zswap and documentation
>>
>> Rationale:
>>
>> Zswap provides compressed swap caching that basically trades CPU cycles
>> for reduced swap I/O.  This trade-off can result in a significant
>> performance improvement, as reads from/writes to the compressed
>> cache are almost always faster than reading from a swap device,
>> which incurs the latency of an asynchronous block I/O read.
>>
>> Some potential benefits:
>> * Desktop/laptop users with limited RAM capacities can mitigate the
>>     performance impact of swapping.
>> * Overcommitted guests that share a common I/O resource can
>>     dramatically reduce their swap I/O pressure, avoiding heavy
>>     handed I/O throttling by the hypervisor.  This allows more work
>>     to get done with less impact to the guest workload and guests
>>     sharing the I/O subsystem
>> * Users with SSDs as swap devices can extend the life of the device by
>>     drastically reducing life-shortening writes.
>>
>> Compressed swap is also provided in zcache, along with page cache
>> compression and RAM clustering through RAMSter. Zswap seeks to deliver
>> the benefit of swap compression to users in a discrete function.
>> This design decision is akin to the Unix design philosophy of doing one
>> thing well; it leaves file cache compression and other features
>> for separate code.
>>
>> Design:
>>
>> Zswap receives pages for compression through the Frontswap API and
>> is able to evict pages from its own compressed pool on an LRU basis
>> and write them back to the backing swap device in the case that the
>> compressed pool is full or unable to secure additional pages from
>> the buddy allocator.
>>
>> Zswap makes use of zsmalloc for managing the compressed memory
>> pool.  This is because zsmalloc is specifically designed to minimize
>> fragmentation on large (> PAGE_SIZE/2) allocation sizes.  Each
>> allocation in zsmalloc is not directly accessible by address.
>> Rather, a handle is returned by the allocation routine and that handle
>> must be mapped before being accessed.  The compressed memory pool grows
>> on demand and shrinks as compressed pages are freed.  The pool is
>> not preallocated.
>>
>> When a swap page is passed from frontswap to zswap, zswap maintains
>> a mapping of the swap entry, a combination of the swap type and swap
>> offset, to the zsmalloc handle that references that compressed swap
>> page.  This mapping is achieved with a red-black tree per swap type.
>> The swap offset is the search key for the tree nodes.
>>
>> Zswap seeks to be simple in its policies.  Sysfs attributes allow for
>> two user controlled policies:
>> * max_compression_ratio - Maximum compression ratio, as a percentage,
>>     for an acceptable compressed page. Any page that does not compress
>>     by at least this ratio will be rejected.
>> * max_pool_percent - The maximum percentage of memory that the compressed
>>     pool can occupy.
>>
>> To enable zswap, the "enabled" attribute must be set to 1 at boot time.
>>
>> Zswap allows the compressor to be selected at kernel boot time by
>> setting the “compressor” attribute.  The default compressor is lzo.
>>
>> A debugfs interface is provided for various statistics about pool size,
>> number of pages stored, and various counters for the reasons pages
>> are rejected.
>>
>> Performance, Kernel Building:
>>
>> Setup
>> ========
>> Gentoo w/ kernel v3.7-rc7
>> Quad-core i5-2500 @ 3.3GHz
>> 512MB DDR3 1600MHz (limited with mem=512m on boot)
>> Filesystem and swap on 80GB HDD (about 58MB/s with hdparm -t)
>> majflt are major page faults reported by the time command
>> pswpin/out is the delta of pswpin/out from /proc/vmstat before and after
>> the make -jN
>>
>> Summary
>> ========
>> * Zswap reduces I/O and improves performance at all swap pressure levels.
>>
>> * Under heavy swapping at 24 threads, zswap reduced I/O by 76%, saving
>>   over 1.5GB of I/O, and cut runtime in half.
>>
>> Details
>> ========
>> I/O (in pages)
>> 	base				zswap				change	change
>> N	pswpin	pswpout	majflt	I/O sum	pswpin	pswpout	majflt	I/O sum	%I/O	MB
>> 8	1	335	291	627	0	0	249	249	-60%	1
>> 12	3688	14315	5290	23293	123	860	5954	6937	-70%	64
>> 16	12711	46179	16803	75693	2936	7390	46092	56418	-25%	75
>> 20	42178	133781	49898	225857	9460	28382	92951	130793	-42%	371
>> 24	96079	357280	105242	558601	7719	18484	109309	135512	-76%	1653
>>
>> Runtime (in seconds)
>> N	base	zswap	%change
>> 8	107	107	0%
>> 12	128	110	-14%
>> 16	191	179	-6%
>> 20	371	240	-35%
>> 24	570	267	-53%
>>
>> %CPU utilization (out of 400% on 4 cpus)
>> N	base	zswap	%change
>> 8	317	319	1%
>> 12	267	311	16%
>> 16	179	191	7%
>> 20	94	143	52%
>> 24	60	128	113%
>>
>>
>> Seth Jennings (7):
>>   debugfs: add get/set for atomic types
>>   zsmalloc: promote to lib/
>>   zswap: add to mm/
>>   mm: break up swap_writepage() for frontswap backends
>>   mm: allow for outstanding swap writeback accounting
>>   zswap: add flushing support
>>   zswap: add documentation
>>
>>  Documentation/vm/zswap.txt                         |   73 ++
>>  drivers/staging/Kconfig                            |    2 -
>>  drivers/staging/Makefile                           |    1 -
>>  drivers/staging/zcache/zcache-main.c               |    3 +-
>>  drivers/staging/zram/zram_drv.h                    |    3 +-
>>  drivers/staging/zsmalloc/Kconfig                   |   10 -
>>  drivers/staging/zsmalloc/Makefile                  |    3 -
>>  fs/debugfs/file.c                                  |   42 +
>>  include/linux/debugfs.h                            |    2 +
>>  include/linux/swap.h                               |    4 +
>>  .../staging/zsmalloc => include/linux}/zsmalloc.h  |    0
>>  lib/Kconfig                                        |   18 +
>>  lib/Makefile                                       |    1 +
>>  .../zsmalloc/zsmalloc-main.c => lib/zsmalloc.c     |    3 +-
>>  mm/Kconfig                                         |   15 +
>>  mm/Makefile                                        |    1 +
>>  mm/page_io.c                                       |   22 +-
>>  mm/swap_state.c                                    |    2 +-
>>  mm/zswap.c                                         | 1073 ++++++++++++++++++++
>>  19 files changed, 1250 insertions(+), 28 deletions(-)
>>  create mode 100644 Documentation/vm/zswap.txt
>>  delete mode 100644 drivers/staging/zsmalloc/Kconfig
>>  delete mode 100644 drivers/staging/zsmalloc/Makefile
>>  rename {drivers/staging/zsmalloc => include/linux}/zsmalloc.h (100%)
>>  rename drivers/staging/zsmalloc/zsmalloc-main.c => lib/zsmalloc.c (99%)
>>  create mode 100644 mm/zswap.c
>>
> 


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCHv4 6/7] zswap: add flushing support
  2013-02-01  7:27   ` Minchan Kim
@ 2013-02-13  6:24     ` Seth Jennings
  0 siblings, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-02-13  6:24 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Greg Kroah-Hartman, Nitin Gupta,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 02/01/2013 01:27 AM, Minchan Kim wrote:
> On Tue, Jan 29, 2013 at 03:40:26PM -0600, Seth Jennings wrote:
>> This patchset adds support for flush pages out of the compressed
>> pool to the swap device
>>

Thanks for the review, Minchan! Sorry for the delayed response.  I'm
prepping v5 for posting.

> 
> I know you didn't have enough time since you sent the previous patch.
> Please add lots of words next time:
> 
> 1. advertise the "awesome feature", which migrates pages from zswap to the real swap device
> 2. What's the tmppage?
> 3. the frontswap_load/store/invalidate/flush races and object lifetimes
> 4. the reclaim policy

Will do.

> 
>> Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
>> ---
>>  mm/zswap.c | 451 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
>>  1 file changed, 434 insertions(+), 17 deletions(-)
>>
>> diff --git a/mm/zswap.c b/mm/zswap.c
>> index a6c2928..b8e5673 100644
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -34,6 +34,12 @@
>>  #include <linux/mempool.h>
>>  #include <linux/zsmalloc.h>
>>  
>> +#include <linux/mm_types.h>
>> +#include <linux/page-flags.h>
>> +#include <linux/swapops.h>
>> +#include <linux/writeback.h>
>> +#include <linux/pagemap.h>
>> +
>>  /*********************************
>>  * statistics
>>  **********************************/
>> @@ -41,6 +47,8 @@
>>  static atomic_t zswap_pool_pages = ATOMIC_INIT(0);
>>  /* The number of compressed pages currently stored in zswap */
>>  static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
>> +/* The number of outstanding pages awaiting writeback */
>> +static atomic_t zswap_outstanding_flushes = ATOMIC_INIT(0);
>>  
>>  /*
>>   * The statistics below are not protected from concurrent access for
>> @@ -49,9 +57,14 @@ static atomic_t zswap_stored_pages = ATOMIC_INIT(0);
>>   * certain event is occurring.
>>  */
>>  static u64 zswap_pool_limit_hit;
>> +static u64 zswap_flushed_pages;
>>  static u64 zswap_reject_compress_poor;
>> +static u64 zswap_flush_attempted;
>> +static u64 zswap_reject_tmppage_fail;
> 
> What does it mean? Hmm, I looked at the code.
> It means the tmppage allocation failed.

I'll add a comment for this and additional information about the role of
the tmppage mechanism in the code since it's a little unusual.

> 
> How about "zswap_tmppage_alloc_fail"?

The method to my naming madness was that any stat that counted a reason
for a failed store started with zswap_reject_*

> 
>> +static u64 zswap_reject_flush_fail;
> 
> I was confused too, so I looked at the code but failed to find a use case.
> Remove?

Good call. I'll remove.  Must have been left over from something else.

> 
>>  static u64 zswap_reject_zsmalloc_fail;
> 
> We can remove this because it can be derived as (zswap_flush_attempted - zswap_saved_by_flush)
> 

This is true.  I'll keep it in for now since it's not completely obvious
it can be derived from other stats without looking at the code.

>>  static u64 zswap_reject_kmemcache_fail;
>> +static u64 zswap_saved_by_flush;
> 
> I can't think of a better name, but it needs a comment at a minimum.

Yeah...

> 
>>  static u64 zswap_duplicate_entry;
>>  
>>  /*********************************
>> @@ -80,6 +93,14 @@ static unsigned int zswap_max_compression_ratio = 80;
>>  module_param_named(max_compression_ratio,
>>  			zswap_max_compression_ratio, uint, 0644);
>>  
>> +/*
>> + * Maximum number of outstanding flushes allowed at any given time.
>> + * This is to prevent decompressing an unbounded number of compressed
>> + * pages into the swap cache all at once, and to help with writeback
>> + * congestion.
>> +*/
>> +#define ZSWAP_MAX_OUTSTANDING_FLUSHES 64
>> +
>>  /*********************************
>>  * compression functions
>>  **********************************/
>> @@ -145,14 +166,23 @@ static void zswap_comp_exit(void)
>>  **********************************/
> 
> Please explain why we need the LRU and the refcount.

Will do.

> 
>>  struct zswap_entry {
>>  	struct rb_node rbnode;
>> +	struct list_head lru;
>> +	int refcount;
> 
> Just a nit:
> I understand why you don't use an atomic, but it would be nice to
> write it down in the patch description to prevent unnecessary
> arguing.

Yes.  For the record, the reason is that we must take the tree lock in
order to change the refcount of an entry contained in the tree.
Incrementing an atomic under lock would be redundantly atomic.
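
The pattern in the patch is effectively:

	spin_lock(&tree->lock);
	entry->refcount++;	/* already serialized by the tree lock */
	spin_unlock(&tree->lock);

so an atomic_inc() inside the critical section would just add a
second, redundant layer of synchronization.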

> 
>>  	unsigned type;
>>  	pgoff_t offset;
>>  	unsigned long handle;
>>  	unsigned int length;
>>  };
>>  
>> +/*
>> + * The tree lock in the zswap_tree struct protects a few things:
>> + * - the rbtree
>> + * - the lru list
>> + * - the refcount field of each entry in the tree
>> + */
>>  struct zswap_tree {
>>  	struct rb_root rbroot;
>> +	struct list_head lru;
>>  	spinlock_t lock;
>>  	struct zs_pool *pool;
>>  };
>> @@ -184,6 +214,8 @@ static inline struct zswap_entry *zswap_entry_cache_alloc(gfp_t gfp)
>>  	entry = kmem_cache_alloc(zswap_entry_cache, gfp);
>>  	if (!entry)
>>  		return NULL;
>> +	INIT_LIST_HEAD(&entry->lru);
>> +	entry->refcount = 1;
>>  	return entry;
>>  }
>>  
>> @@ -192,6 +224,17 @@ static inline void zswap_entry_cache_free(struct zswap_entry *entry)
>>  	kmem_cache_free(zswap_entry_cache, entry);
>>  }
>>  
>> +static inline void zswap_entry_get(struct zswap_entry *entry)
>> +{
>> +	entry->refcount++;
>> +}
>> +
>> +static inline int zswap_entry_put(struct zswap_entry *entry)
>> +{
>> +	entry->refcount--;
>> +	return entry->refcount;
>> +}
>> +
>>  /*********************************
>>  * rbtree functions
>>  **********************************/
>> @@ -367,6 +410,278 @@ static struct zs_ops zswap_zs_ops = {
>>  };
>>  
>>  /*********************************
>> +* flush code
>> +**********************************/
> 
> As Andrew pointed out, let's change the naming.
> I'd like it to imply "send back to the original seat" instead of "flush", because
> the page was just intercepted by frontswap on the way.
> Thinking about it more, the current implementation goes with that, but we could
> add an indirection layer to swap so a frontswap page could be migrated to another
> swap slot to enhance swapout bandwidth, so "migrate the swap page elsewhere"
> is more proper.  I'd like to defer to native speakers. :)

I'm going to use "writeback" for now since "resumed writeback" is what
I'm naming this process in zswap.
> 
>> +static void zswap_end_swap_write(struct bio *bio, int err)
>> +{
>> +	end_swap_bio_write(bio, err);
>> +	atomic_dec(&zswap_outstanding_flushes);
>> +	zswap_flushed_pages++;
>> +}
>> +
>> +/*
>> + * zswap_get_swap_cache_page
>> + *
>> + * This is an adaption of read_swap_cache_async()
> 
> Please write down what this function's goal is and why we need it,
> given the function's complexity.

Will do.

> 
>> + *
>> + * If success, page is returned in retpage
>> + * Returns 0 if page was already in the swap cache, page is not locked
>> + * Returns 1 if the new page needs to be populated, page is locked
> 
>       Returns -ENOMEM if the allocation failed.
> 
>> + */
>> +static int zswap_get_swap_cache_page(swp_entry_t entry,
>> +				struct page **retpage)
>> +{
>> +	struct page *found_page, *new_page = NULL;
>> +	int err;
>> +
>> +	*retpage = NULL;
>> +	do {
>> +		/*
>> +		 * First check the swap cache.  Since this is normally
>> +		 * called after lookup_swap_cache() failed, re-calling
>> +		 * that would confuse statistics.
>> +		 */
>> +		found_page = find_get_page(&swapper_space, entry.val);
>> +		if (found_page)
>> +			break;
>> +
>> +		/*
>> +		 * Get a new page to read into from swap.
>> +		 */
>> +		if (!new_page) {
>> +			new_page = alloc_page(GFP_KERNEL);
>> +			if (!new_page)
>> +				break; /* Out of memory */
>> +		}
>> +
>> +		/*
>> +		 * call radix_tree_preload() while we can wait.
>> +		 */
>> +		err = radix_tree_preload(GFP_KERNEL);
>> +		if (err)
>> +			break;
>> +
>> +		/*
>> +		 * Swap entry may have been freed since our caller observed it.
>> +		 */
>> +		err = swapcache_prepare(entry);
>> +		if (err == -EEXIST) { /* seems racy */
>> +			radix_tree_preload_end();
>> +			continue;
>> +		}
>> +		if (err) { /* swp entry is obsolete ? */
>> +			radix_tree_preload_end();
>> +			break;
>> +		}
>> +
>> +		/* May fail (-ENOMEM) if radix-tree node allocation failed. */
>> +		__set_page_locked(new_page);
>> +		SetPageSwapBacked(new_page);
>> +		err = __add_to_swap_cache(new_page, entry);
>> +		if (likely(!err)) {
>> +			radix_tree_preload_end();
>> +			lru_cache_add_anon(new_page);
>> +			*retpage = new_page;
>> +			return 1;
>> +		}
>> +		radix_tree_preload_end();
>> +		ClearPageSwapBacked(new_page);
>> +		__clear_page_locked(new_page);
>> +		/*
>> +		 * add_to_swap_cache() doesn't return -EEXIST, so we can safely
>> +		 * clear SWAP_HAS_CACHE flag.
>> +		 */
>> +		swapcache_free(entry, NULL);
>> +	} while (err != -ENOMEM);
>> +
>> +	if (new_page)
>> +		page_cache_release(new_page);
>> +	if (!found_page)
>> +		return -ENOMEM;
>> +	*retpage = found_page;
>> +	return 0;
>> +}
>> +
>> +static int zswap_flush_entry(struct zswap_entry *entry)
>> +{
>> +	unsigned long type = entry->type;
>> +	struct zswap_tree *tree = zswap_trees[type];
>> +	struct page *page;
>> +	swp_entry_t swpentry;
>> +	u8 *src, *dst;
>> +	unsigned int dlen;
>> +	int ret, refcount;
>> +	struct writeback_control wbc = {
>> +		.sync_mode = WB_SYNC_NONE,
>> +	};
>> +
>> +	/* get/allocate page in the swap cache */
>> +	swpentry = swp_entry(type, entry->offset);
>> +	ret = zswap_get_swap_cache_page(swpentry, &page);
> 
> IMHO, zswap_get_swap_cache_page() should return a meaningful enum type
> so we can use a switch instead of an if-else series.

Yes, that would be cleaner.  During development, I was hoping I could
find a way to use the existing read_swap_cache_async(), but I couldn't
figure out a clean way to do it.
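
Something like this, perhaps (names are only a sketch for v5):

	enum zswap_get_swap_ret {
		ZSWAP_SWAPCACHE_NEW,	/* new page, locked, caller populates it */
		ZSWAP_SWAPCACHE_EXIST,	/* page was already in the swap cache */
		ZSWAP_SWAPCACHE_NOMEM	/* allocation failed */
	};

and then in zswap_flush_entry():

	switch (zswap_get_swap_cache_page(swpentry, &page)) {
	case ZSWAP_SWAPCACHE_NOMEM:
		return -ENOMEM;
	case ZSWAP_SWAPCACHE_EXIST:
		/* page already in the swap cache, ignore for now */
		break;
	case ZSWAP_SWAPCACHE_NEW:
		/* decompress into the newly allocated page */
		break;
	}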

> 
>> +	if (ret < 0)
>> +		return ret;
>> +	else if (ret) {
>> +		/* decompress */
>> +		dlen = PAGE_SIZE;
>> +		src = zs_map_object(tree->pool, entry->handle, ZS_MM_RO);
>> +		dst = kmap_atomic(page);
>> +		ret = zswap_comp_op(ZSWAP_COMPOP_DECOMPRESS, src, entry->length,
>> +				dst, &dlen);
>> +		kunmap_atomic(dst);
>> +		zs_unmap_object(tree->pool, entry->handle);
>> +		BUG_ON(ret);
>> +		BUG_ON(dlen != PAGE_SIZE);
>> +		SetPageUptodate(page);
>> +	} else {
>> +		/* page is already in the swap cache, ignore for now */
>> +		spin_lock(&tree->lock);
>> +		refcount = zswap_entry_put(entry);
>> +		spin_unlock(&tree->lock);
>> +
>> +		if (likely(refcount))
>> +			return 0;
> 
> It doesn't mean we wrote the page out and migrated it, so it would be
> better to return non-zero.
> 
>> +
>> +		/* if the refcount is zero, invalidate must have come in */
>> +		/* free */
>> +		zs_free(tree->pool, entry->handle);
>> +		zswap_entry_cache_free(entry);
>> +		atomic_dec(&zswap_stored_pages);
> 
> It would be better to factor these calls out into a helper function.

Yes. I'll add this.

> 
> void free_or_destroy_or_something_zswap_entry(...)
> {
>         zs_free
>         zswap_entry_cache_free
>         atomic_dec
> }
> 
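
Concretely, maybe something like (the name is still up for debate):

	static void zswap_free_entry(struct zswap_tree *tree,
				     struct zswap_entry *entry)
	{
		zs_free(tree->pool, entry->handle);
		zswap_entry_cache_free(entry);
		atomic_dec(&zswap_stored_pages);
	}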
>> +
>> +		return 0;
>> +	}
>> +
>> +	/* start writeback */
>> +	SetPageReclaim(page);
>> +	/*
>> +	 * Return value is ignored here because it doesn't change anything
>> +	 * for us.  Page is returned unlocked.
>> +	 */
>> +	(void)__swap_writepage(page, &wbc, zswap_end_swap_write);
>> +	page_cache_release(page);
>> +	atomic_inc(&zswap_outstanding_flushes);
>> +
>> +	/* remove */
>> +	spin_lock(&tree->lock);
>> +	refcount = zswap_entry_put(entry);
>> +	if (refcount > 1) {
> 
> Let's be kind and add a comment:
> 
>                 /* If the race happens with zswap_frontswap_load,
>                  * we move the zswap_entry to the tail again because
>                  * it was used recently, so there is no point in
>                  * removing it from memory.
>                  * It could leave a duplicate copy between memory and
>                  * the real swap device, but no biggie.
>                  */

I didn't follow this one :-/

> 
>> +		/* load in progress, load will free */
>> +		spin_unlock(&tree->lock);
>> +		return 0;
>> +	}
>> +	if (refcount == 1)
>> +		/* no invalidate yet, remove from rbtree */
>> +		rb_erase(&entry->rbnode, &tree->rbroot);
>> +	spin_unlock(&tree->lock);
>> +
>> +	/* free */
>> +	zs_free(tree->pool, entry->handle);
>> +	zswap_entry_cache_free(entry);
>> +	atomic_dec(&zswap_stored_pages);
>> +
>> +	return 0;
>> +}
>> +
>> +static void zswap_flush_entries(unsigned type, int nr)
> 
> Just a nit:
> it would be useful to return the number of migrated pages.

I can do that.  Wouldn't be used though (for now).

> 
>> +{
>> +	struct zswap_tree *tree = zswap_trees[type];
>> +	struct zswap_entry *entry;
>> +	int i, ret;
>> +
>> +/*
>> + * This limits is arbitrary for now until a better
>> + * policy can be implemented. This is so we don't
>> + * eat all of RAM decompressing pages for writeback.
>> + */
> 
> Indent.

Yes.

> 
>> +	if (atomic_read(&zswap_outstanding_flushes) >
>> +		ZSWAP_MAX_OUTSTANDING_FLUSHES)
>> +		return;
>> +
>> +	for (i = 0; i < nr; i++) {
> 
> As other people already pointed out, LRU reclaim could be a
> problem because objects are spread across several zspages.
> Just an idea: I'd like to reclaim per zspage instead of per
> object.  For that, we would have to add some age facility to
> zsmalloc, which could be enabled optionally by Kconfig.
> 
> zsmalloc would give an age to every allocated object.
> For example, say we allocate some objects in the following order:
> 
> Object allocation order : O(A), O(B), O(C), O(D), O(E)
> 
> so the LRU ordering is the same:
> 
> LRU ordering : head-O(A)->O(B)->O(C)->O(D)->O(E)->tail
> (tail is the most recently used object)
> 
> zsmalloc could then assign ages as follows:
> 
> O(A,1), O(B,2), O(C,3), O(D,4), O(E,5)
> 
> and the objects might be laid out like this (the number after
> each colon is the sum of the ages of the objects in that zspage):
> 
>      zspage 1:4         zspage 2:5          zspage 3:6
> | O(A, 1), O(C, 3) | -- | O(E, 5) | -- | O(B, 2), O(D, 4) |
> 
> When zswap tries to reclaim, it wants the zspage with the
> smallest age sum, which here is zspage 1 with age 4.  It is
> likely to contain the least recently used objects, and
> reclaiming it is guaranteed to get us a whole free page.
> 
>> +		/* dequeue from lru */
>> +		spin_lock(&tree->lock);
>> +		if (list_empty(&tree->lru)) {
>> +			spin_unlock(&tree->lock);
>> +			break;
>> +		}
>> +		entry = list_first_entry(&tree->lru,
>> +				struct zswap_entry, lru);
>> +		list_del(&entry->lru);
> 
> Use list_del_init and list_empty.

Ah yes, very good.

> 
>> +		zswap_entry_get(entry);
> 
> When do you decrease the refcount if zswap_flush_entry() fails due to an allocation failure?

Good catch.  I'll fix it up.
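
Probably something like this in the failure path (sketch):

	ret = zswap_flush_entry(entry);
	if (ret) {
		/* put back on the lru and drop the ref taken above */
		spin_lock(&tree->lock);
		list_add(&entry->lru, &tree->lru);
		zswap_entry_put(entry);
		spin_unlock(&tree->lock);
	}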

> 
>> +		spin_unlock(&tree->lock);
>> +		ret = zswap_flush_entry(entry);
>> +		if (ret) {
>> +			/* put back on the lru */
>> +			spin_lock(&tree->lock);
>> +			list_add(&entry->lru, &tree->lru);
>> +			spin_unlock(&tree->lock);
>> +		} else {
>> +			if (atomic_read(&zswap_outstanding_flushes) >
>> +				ZSWAP_MAX_OUTSTANDING_FLUSHES)
>> +				break;
>> +		}
>> +	}
>> +}
>> +
>> +/*******************************************
>> +* page pool for temporary compression result
>> +********************************************/
>> +#define ZSWAP_TMPPAGE_POOL_PAGES 16
>> +static LIST_HEAD(zswap_tmppage_list);
>> +static DEFINE_SPINLOCK(zswap_tmppage_lock);
>> +
>> +static void zswap_tmppage_pool_destroy(void)
>> +{
>> +	struct page *page, *tmppage;
>> +
>> +	spin_lock(&zswap_tmppage_lock);
>> +	list_for_each_entry_safe(page, tmppage, &zswap_tmppage_list, lru) {
>> +		list_del(&page->lru);
>> +		__free_pages(page, 1);
>> +	}
>> +	spin_unlock(&zswap_tmppage_lock);
>> +}
>> +
>> +static int zswap_tmppage_pool_create(void)
>> +{
>> +	int i;
>> +	struct page *page;
>> +
>> +	for (i = 0; i < ZSWAP_TMPPAGE_POOL_PAGES; i++) {
>> +		page = alloc_pages(GFP_KERNEL, 1);
>> +		if (!page) {
>> +			zswap_tmppage_pool_destroy();
>> +			return -ENOMEM;
>> +		}
>> +		spin_lock(&zswap_tmppage_lock);
>> +		list_add(&page->lru, &zswap_tmppage_list);
>> +		spin_unlock(&zswap_tmppage_lock);
>> +	}
>> +	return 0;
>> +}
>> +
>> +static inline struct page *zswap_tmppage_alloc(void)
>> +{
>> +	struct page *page;
>> +
>> +	spin_lock(&zswap_tmppage_lock);
>> +	if (list_empty(&zswap_tmppage_list)) {
>> +		spin_unlock(&zswap_tmppage_lock);
>> +		return NULL;
>> +	}
>> +	page = list_first_entry(&zswap_tmppage_list, struct page, lru);
>> +	list_del(&page->lru);
>> +	spin_unlock(&zswap_tmppage_lock);
>> +	return page;
>> +}
>> +
>> +static inline void zswap_tmppage_free(struct page *page)
>> +{
>> +	spin_lock(&zswap_tmppage_lock);
>> +	list_add(&page->lru, &zswap_tmppage_list);
>> +	spin_unlock(&zswap_tmppage_lock);
>> +}
>> +
>> +/*********************************
>>  * frontswap hooks
>>  **********************************/
>>  /* attempts to compress and store an single page */
>> @@ -378,7 +693,9 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>>  	unsigned int dlen = PAGE_SIZE;
>>  	unsigned long handle;
>>  	char *buf;
>> -	u8 *src, *dst;
>> +	u8 *src, *dst, *tmpdst;
>> +	struct page *tmppage;
>> +	bool flush_attempted = 0;
>>  
>>  	if (!tree) {
>>  		ret = -ENODEV;
>> @@ -392,12 +709,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>>  	kunmap_atomic(src);
>>  	if (ret) {
>>  		ret = -EINVAL;
>> -		goto putcpu;
>> +		goto freepage;
>>  	}
>>  	if ((dlen * 100 / PAGE_SIZE) > zswap_max_compression_ratio) {
>>  		zswap_reject_compress_poor++;
>>  		ret = -E2BIG;
>> -		goto putcpu;
>> +		goto freepage;
>>  	}
>>  
>>  	/* store */
>> @@ -405,15 +722,46 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>>  		__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
>>  			__GFP_NOWARN);
>>  	if (!handle) {
>> -		zswap_reject_zsmalloc_fail++;
>> -		ret = -ENOMEM;
>> -		goto putcpu;
>> +		zswap_flush_attempted++;
>> +		/*
>> +		 * Copy compressed buffer out of per-cpu storage so
>> +		 * we can re-enable preemption.
>> +		 */
>> +		tmppage = zswap_tmppage_alloc();
>> +		if (!tmppage) {
>> +			zswap_reject_tmppage_fail++;
>> +			ret = -ENOMEM;
>> +			goto freepage;
>> +		}
>> +		flush_attempted = 1;
>> +		tmpdst = page_address(tmppage);
>> +		memcpy(tmpdst, dst, dlen);
>> +		dst = tmpdst;
>> +		put_cpu_var(zswap_dstmem);
> 
> Do we really want an extra copy just to re-enable preemption?  I don't
> have a better idea right now, but surely we can come up with a better
> approach eventually.  Let's think about it.
> 
>> +
>> +		/* try to free up some space */
>> +		/* TODO: replace with more targeted policy */
>> +		zswap_flush_entries(type, 16);
>> +		/* try again, allowing wait */
>> +		handle = zs_malloc(tree->pool, dlen,
>> +			__GFP_NORETRY | __GFP_HIGHMEM | __GFP_NOMEMALLOC |
>> +				__GFP_NOWARN);
>> +		if (!handle) {
>> +			/* still no space, fail */
>> +			zswap_reject_zsmalloc_fail++;
>> +			ret = -ENOMEM;
>> +			goto freepage;
>> +		}
>> +		zswap_saved_by_flush++;
>>  	}
>>  
>>  	buf = zs_map_object(tree->pool, handle, ZS_MM_WO);
>>  	memcpy(buf, dst, dlen);
>>  	zs_unmap_object(tree->pool, handle);
>> -	put_cpu_var(zswap_dstmem);
>> +	if (flush_attempted)
>> +		zswap_tmppage_free(tmppage);
>> +	else
>> +		put_cpu_var(zswap_dstmem);
>>  
>>  	/* allocate entry */
>>  	entry = zswap_entry_cache_alloc(GFP_KERNEL);
>> @@ -436,16 +784,19 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>>  		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
>>  		if (ret == -EEXIST) {
>>  			zswap_duplicate_entry++;
>> -
>> -			/* remove from rbtree */
>> +			/* remove from rbtree and lru */
>>  			rb_erase(&dupentry->rbnode, &tree->rbroot);
>> -			
>> -			/* free */
>> -			zs_free(tree->pool, dupentry->handle);
>> -			zswap_entry_cache_free(dupentry);
>> -			atomic_dec(&zswap_stored_pages);
>> +			if (dupentry->lru.next != LIST_POISON1)
>> +				list_del(&dupentry->lru);
>> +			if (!zswap_entry_put(dupentry)) {
>> +				/* free */
>> +				zs_free(tree->pool, dupentry->handle);
>> +				zswap_entry_cache_free(dupentry);
>> +				atomic_dec(&zswap_stored_pages);
>> +			}
>>  		}
>>  	} while (ret == -EEXIST);
>> +	list_add_tail(&entry->lru, &tree->lru);
>>  	spin_unlock(&tree->lock);
>>  
>>  	/* update stats */
>> @@ -453,8 +804,11 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset, struct page *pag
>>  
>>  	return 0;
>>  
>> -putcpu:
>> -	put_cpu_var(zswap_dstmem);
>> +freepage:
>> +	if (flush_attempted)
>> +		zswap_tmppage_free(tmppage);
>> +	else
>> +		put_cpu_var(zswap_dstmem);
>>  reject:
>>  	return ret;
>>  }
>> @@ -469,10 +823,21 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page
>>  	struct zswap_entry *entry;
>>  	u8 *src, *dst;
>>  	unsigned int dlen;
>> +	int refcount;
>>  
>>  	/* find */
>>  	spin_lock(&tree->lock);
>>  	entry = zswap_rb_search(&tree->rbroot, offset);
>> +	if (!entry) {
>> +		/* entry was flushed */
>> +		spin_unlock(&tree->lock);
>> +		return -1;
>> +	}
> 
> Let's be kind and add a comment:
> 
>         /* flusher/invalidate can destroy entry under us */

Sure :)

Thanks Minchan!

Seth

>> +	zswap_entry_get(entry);
>> +
>> +	/* remove from lru */
>> +	if (entry->lru.next != LIST_POISON1)
>> +		list_del(&entry->lru);
> 
>         if (!list_empty(&entry->lru))
>                 list_del_init(&entry->lru);
> 
>>  	spin_unlock(&tree->lock);
>>  
>>  	/* decompress */
>> @@ -484,6 +849,25 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset, struct page *page
>>  	kunmap_atomic(dst);
>>  	zs_unmap_object(tree->pool, entry->handle);
>>  
>> +	spin_lock(&tree->lock);
>> +	refcount = zswap_entry_put(entry);
>> +	if (likely(refcount)) {
>> +		list_add_tail(&entry->lru, &tree->lru);
>> +		spin_unlock(&tree->lock);
>> +		return 0;
>> +	}
>> +	spin_unlock(&tree->lock);
>> +
>> +	/*
>> +	 * We don't have to unlink from the rbtree because zswap_flush_entry()
>> +	 * or zswap_frontswap_invalidate_page() has already done this for us if we
>> +	 * are the last reference.
>> +	 */
>> +	/* free */
>> +	zs_free(tree->pool, entry->handle);
>> +	zswap_entry_cache_free(entry);
>> +	atomic_dec(&zswap_stored_pages);
>> +
>>  	return 0;
>>  }
>>  
>> @@ -492,14 +876,27 @@ static void zswap_frontswap_invalidate_page(unsigned type, pgoff_t offset)
>>  {
>>  	struct zswap_tree *tree = zswap_trees[type];
>>  	struct zswap_entry *entry;
>> +	int refcount;
>>  
>>  	/* find */
>>  	spin_lock(&tree->lock);
>>  	entry = zswap_rb_search(&tree->rbroot, offset);
>> +	if (!entry) {
>> +		/* entry was flushed */
>> +		spin_unlock(&tree->lock);
>> +		return;
>> +	}
>>  
>> -	/* remove from rbtree */
>> +	/* remove from rbtree and lru */
>>  	rb_erase(&entry->rbnode, &tree->rbroot);
>> +	if (entry->lru.next != LIST_POISON1)
>> +		list_del(&entry->lru);
>> +	refcount = zswap_entry_put(entry);
>>  	spin_unlock(&tree->lock);
>> +	if (refcount) {
>> +		/* must be flushing */
>> +		return;
>> +	}
>>  
>>  	/* free */
>>  	zs_free(tree->pool, entry->handle);
>> @@ -528,6 +925,7 @@ static void zswap_frontswap_invalidate_area(unsigned type)
>>  		node = next;
>>  	}
>>  	tree->rbroot = RB_ROOT;
>> +	INIT_LIST_HEAD(&tree->lru);
>>  	spin_unlock(&tree->lock);
>>  }
>>  
>> @@ -543,6 +941,7 @@ static void zswap_frontswap_init(unsigned type)
>>  	if (!tree->pool)
>>  		goto freetree;
>>  	tree->rbroot = RB_ROOT;
>> +	INIT_LIST_HEAD(&tree->lru);
>>  	spin_lock_init(&tree->lock);
>>  	zswap_trees[type] = tree;
>>  	return;
>> @@ -578,20 +977,32 @@ static int __init zswap_debugfs_init(void)
>>  	if (!zswap_debugfs_root)
>>  		return -ENOMEM;
>>  
>> +	debugfs_create_u64("saved_by_flush", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_saved_by_flush);
>>  	debugfs_create_u64("pool_limit_hit", S_IRUGO,
>>  			zswap_debugfs_root, &zswap_pool_limit_hit);
>> +	debugfs_create_u64("reject_flush_attempted", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_flush_attempted);
>> +	debugfs_create_u64("reject_tmppage_fail", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_reject_tmppage_fail);
>> +	debugfs_create_u64("reject_flush_fail", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_reject_flush_fail);
>>  	debugfs_create_u64("reject_zsmalloc_fail", S_IRUGO,
>>  			zswap_debugfs_root, &zswap_reject_zsmalloc_fail);
>>  	debugfs_create_u64("reject_kmemcache_fail", S_IRUGO,
>>  			zswap_debugfs_root, &zswap_reject_kmemcache_fail);
>>  	debugfs_create_u64("reject_compress_poor", S_IRUGO,
>>  			zswap_debugfs_root, &zswap_reject_compress_poor);
>> +	debugfs_create_u64("flushed_pages", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_flushed_pages);
>>  	debugfs_create_u64("duplicate_entry", S_IRUGO,
>>  			zswap_debugfs_root, &zswap_duplicate_entry);
>>  	debugfs_create_atomic_t("pool_pages", S_IRUGO,
>>  			zswap_debugfs_root, &zswap_pool_pages);
>>  	debugfs_create_atomic_t("stored_pages", S_IRUGO,
>>  			zswap_debugfs_root, &zswap_stored_pages);
>> +	debugfs_create_atomic_t("outstanding_flushes", S_IRUGO,
>> +			zswap_debugfs_root, &zswap_outstanding_flushes);
>>  
>>  	return 0;
>>  }
>> @@ -627,6 +1038,10 @@ static int __init init_zswap(void)
>>  		pr_err("zswap: page pool initialization failed\n");
>>  		goto pagepoolfail;
>>  	}
>> +	if (zswap_tmppage_pool_create()) {
>> +		pr_err("zswap: tmppage pool initialization failed\n");
>> +		goto tmppoolfail;
>> +	}
>>  	if (zswap_comp_init()) {
>>  		pr_err("zswap: compressor initialization failed\n");
>>  		goto compfail;
>> @@ -642,6 +1057,8 @@ static int __init init_zswap(void)
>>  pcpufail:
>>  	zswap_comp_exit();
>>  compfail:
>> +	zswap_tmppage_pool_destroy();
>> +tmppoolfail:
>>  	zswap_page_pool_destroy();
>>  pagepoolfail:
>>  	zswap_entry_cache_destory();
>> -- 
>> 1.8.1.1
>>
> 



* Re: [PATCHv4 2/7] zsmalloc: promote to lib/
  2013-01-29 22:51   ` Andrew Morton
  2013-01-30 16:28     ` Seth Jennings
@ 2013-02-13 16:00     ` Seth Jennings
  1 sibling, 0 replies; 34+ messages in thread
From: Seth Jennings @ 2013-02-13 16:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Greg Kroah-Hartman, Nitin Gupta, Minchan Kim,
	Konrad Rzeszutek Wilk, Dan Magenheimer, Robert Jennings,
	Jenifer Hopper, Mel Gorman, Johannes Weiner, Rik van Riel,
	Larry Woodman, Benjamin Herrenschmidt, Dave Hansen, linux-mm,
	linux-kernel, devel

On 01/29/2013 04:51 PM, Andrew Morton wrote:
> On Tue, 29 Jan 2013 15:40:22 -0600
> Seth Jennings <sjenning@linux.vnet.ibm.com> wrote:
> 
>> This patch promotes the slab-based zsmalloc memory allocator
>> from the staging tree to lib/
> 
> Hate to rain on the parade, but...  we haven't reviewed zsmalloc
> yet.  At least, I haven't, and I haven't seen others do so.
> 
> So how's about we forget that zsmalloc was previously in staging and
> send the zsmalloc code out for review?  With a very good changelog
> explaining why it exists, what problems it solves, etc.
> 
> 
> I peeked.
> 
> Don't answer any of the below questions - they are examples of
> concepts which should be accessible to readers of the
> hopefully-forthcoming very-good-changelog.

I know you just said "don't answer", but I wanted to explain why
certain points aren't addressed in the new patchset I'm about to post
later today.

> 
> - kmap_atomic() returns a void* - there's no need to cast its return value.

In places where we do pointer arithmetic, we cast to avoid doing
arithmetic on a void *, which gcc accepts as an extension but other
compilers do not.
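
For illustration (a sketch of the pattern, not the actual zsmalloc
code; page, offset, buf and len are placeholders):

	void *va = kmap_atomic(page);
	/* cast: arithmetic on a void * is a gcc extension, not standard C */
	u8 *p = (u8 *)va + offset;

	memcpy(buf, p, len);
	kunmap_atomic(va);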

> 
> - Remove private MAX(), use the (much better implemented) max().

We can't use max() or max_t() because ZS_SIZE_CLASSES, which uses
ZS_MIN_ALLOC_SIZE, is used to define an array size in struct zs_pool.
The expression must therefore be a compile-time constant, and the
kernel's max() expands to a GNU statement expression, which can't
appear in a constant expression.
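
A simplified illustration (made-up constants, not the real zsmalloc
values):

	/* kernel max() expands to a statement expression, which can't
	 * appear in a constant expression; a plain ternary macro can. */
	#define MAX(a, b) ((a) >= (b) ? (a) : (b))
	#define ZS_MIN_ALLOC_SIZE	MAX(32, 1 << 7)	/* made-up */
	#define ZS_MAX_ALLOC_SIZE	4096		/* made-up */
	#define ZS_SIZE_CLASSES \
		(((ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE) / 16) + 1)

	struct zs_pool {
		/* array length must be a constant expression */
		struct size_class size_class[ZS_SIZE_CLASSES];
	};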

> 
> - It says "This was one of the major issues with its predecessor
>   (xvmalloc)", but drivers/staging/ramster/xvmalloc.c is still in the tree.

Yes, this was a bit of a mess.  I think you'll find that's no longer
the case; Dan has removed those files from zcache/ramster.

> 
> - USE_PGTABLE_MAPPING should be done via Kconfig.

Minchan did this work, and it will be included in the next version of
the patchset.

I've added more documentation and comments in the new patchset.
Hopefully they will help with understanding the more complicated parts
of the code.

Thanks,
Seth



end of thread (newest: 2013-02-13 16:01 UTC)

Thread overview: 34+ messages
2013-01-29 21:40 [PATCHv4 0/7] zswap: compressed swap caching Seth Jennings
2013-01-29 21:40 ` [PATCHv4 1/7] debugfs: add get/set for atomic types Seth Jennings
2013-01-29 21:40 ` [PATCHv4 2/7] zsmalloc: promote to lib/ Seth Jennings
2013-01-29 22:51   ` Andrew Morton
2013-01-30 16:28     ` Seth Jennings
2013-01-30 23:34       ` Andrew Morton
2013-01-31  5:35       ` Minchan Kim
2013-02-13 16:00     ` Seth Jennings
2013-01-29 21:40 ` [PATCHv4 3/7] zswap: add to mm/ Seth Jennings
2013-01-31  7:07   ` Minchan Kim
2013-01-31 19:06     ` Seth Jennings
2013-01-31 20:07       ` Robert Jennings
2013-02-01  2:38       ` Minchan Kim
2013-02-01 15:31         ` Seth Jennings
2013-02-01 17:46           ` Seth Jennings
2013-01-29 21:40 ` [PATCHv4 4/7] mm: break up swap_writepage() for frontswap backends Seth Jennings
2013-01-29 21:40 ` [PATCHv4 5/7] mm: allow for outstanding swap writeback accounting Seth Jennings
2013-01-29 21:40 ` [PATCHv4 6/7] zswap: add flushing support Seth Jennings
2013-01-29 23:03   ` Andrew Morton
2013-02-01  7:27   ` Minchan Kim
2013-02-13  6:24     ` Seth Jennings
2013-01-29 21:40 ` [PATCHv4 7/7] zswap: add documentation Seth Jennings
2013-01-29 23:07   ` Andrew Morton
2013-01-29 22:14 ` [PATCHv4 0/7] zswap: compressed swap caching Joe Perches
2013-01-29 22:49   ` Seth Jennings
2013-01-30  4:32     ` Minchan Kim
2013-01-30 16:01       ` Seth Jennings
2013-02-01  1:39 ` Simon Jeons
2013-02-01 15:13   ` Seth Jennings
2013-02-03  0:17     ` Simon Jeons
2013-02-04 14:56       ` Seth Jennings
2013-02-04  1:03     ` Simon Jeons
2013-02-04 15:07       ` Seth Jennings
     [not found] ` <5110287A.5050200@linux.vnet.ibm.com>
2013-02-04 21:45   ` Seth Jennings
